Search Results

Search found 9332 results on 374 pages for 'an original alias'.


  • I just discovered why all ASP.Net websites are slow, and I am trying to work out what to do about it

    - by James
    I just discovered that every request in an ASP.Net web application acquires a Session lock at the beginning of the request, and then releases it at the end of the request! I mean, WTF Microsoft! In case the implication is lost on you, as it was on me at first, this basically means the following:

    Anytime an ASP.Net webpage is taking a long time to load (maybe due to a slow database call or whatever), and the user decides to navigate to a different page because they are tired of waiting, THEY CAN'T! The ASP.Net session lock forces the new page request to wait until the original request has finished its painfully slow load. Anytime an UpdatePanel is loading slowly, and the user decides to navigate to a different page before the UpdatePanel has finished updating... THEY CAN'T! Same lock, same wait.

    So what are the options? So far I have come up with:
    - Implement a custom session state provider, which ASP.Net supports. I haven't found many out there to copy, and it seems high risk and easy to mess up.
    - Keep track of all requests in progress, and if a request comes in from the same user, cancel the original request. Seems extreme, but it would work (I think).
    - Don't use Session! Whenever I need some kind of state for the user, I could use Cache instead and key items on the authenticated user's name, or some such thing. Again, seems extreme.

    I really can't believe the ASP.Net team at Microsoft would have left such a huge performance bottleneck in the framework at version 4.0. Am I missing something obvious? How hard would it be to use a thread-safe collection for the Session? Any advice much appreciated.
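
    For what it's worth, the usual way around this (rather than dropping Session entirely) is to mark pages that only read session state, which makes ASP.Net take a reader lock instead of the exclusive one. A minimal sketch; the page and class names below are made up for illustration:

        <%@ Page Language="C#" EnableSessionState="ReadOnly"
                 CodeBehind="SlowReport.aspx.cs" Inherits="MyApp.SlowReport" %>

        <!-- or site-wide in web.config, overridable per page -->
        <configuration>
          <system.web>
            <pages enableSessionState="ReadOnly" />
          </system.web>
        </configuration>

    Pages that never touch session can go further with EnableSessionState="false", which skips the session lock altogether.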

    Read the article

  • How to "upgrade" the database in the real world?

    - by Tattat
    My company has developed a web application using PHP + MySQL. The system can display a product's original price and a discount price to the user. If you haven't logged in, you get the original price; if you are logged in, you get the discount price. It is pretty easy to understand. But my company wants more features in the system: it should display different prices based on the user. For example, user A is a golden partner, so he gets 50% off. User B is a silver partner and only gets 30% off. But this logic was not provided for in the original system, so I need to add some attributes to the database, at least a user type in this example. Is there any recommendation on how to migrate the current database to the new version of the database? Also, all the data should be preserved, and the server should keep working 24/7 (without stopping the database). Is it possible to do so? Also, any recommendations for future maintenance? Thank you.
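
    As a sketch of the kind of schema change being described (the table and column names here are assumptions, not taken from the original system), the partner tier and its discount can be added with plain ALTER TABLE statements and backfilled in place, so existing rows keep working while the site stays up:

        -- hypothetical names: users table, partner_level column, partner_discounts lookup table
        ALTER TABLE users ADD COLUMN partner_level VARCHAR(20) NOT NULL DEFAULT 'standard';

        CREATE TABLE partner_discounts (
            partner_level VARCHAR(20) PRIMARY KEY,
            discount_pct  DECIMAL(5,2) NOT NULL
        );

        INSERT INTO partner_discounts (partner_level, discount_pct) VALUES
            ('standard', 0.00),
            ('silver',  30.00),
            ('golden',  50.00);

        -- existing users keep the default 'standard' level, so no data is lost
        UPDATE users SET partner_level = 'golden' WHERE username = 'userA';

    Because the new column has a default, old application code that never mentions it keeps running unchanged while the new price logic is rolled out.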

    Read the article

  • How to remove words based on a word count

    - by Chris
    Here is what I'm trying to accomplish. I have an object coming back from the database with a string description. This description can be up to 1000 characters long, but we only want to display a short view of it. So I coded up the following, but I'm having trouble actually removing the words beyond the limit after the regular expression finds the total count of words. Does anyone have a good way of displaying only the words that fall under the Regex.Matches count? Thanks!

        if (!string.IsNullOrEmpty(myObject.Description))
        {
            string original = myObject.Description;
            MatchCollection wordColl = Regex.Matches(original, @"[\S]+");
            if (wordColl.Count < 70) // 70 words?
            {
                uxDescriptionDisplay.Text = string.Format("<p>{0}</p>", myObject.Description);
            }
            else
            {
                string shortenedText = original.Remove(200); // 200 characters?
                uxDescriptionDisplay.Text = string.Format("<p>{0}</p>", shortenedText);
            }
        }
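
    One way to cut on the 70th word rather than at a fixed 200 characters is to use the position of the last match you want to keep; each Match records where it starts in the original string and how long it is. A rough sketch along the lines of the snippet above (the 70-word limit is kept from it):

        if (!string.IsNullOrEmpty(myObject.Description))
        {
            string original = myObject.Description;
            MatchCollection wordColl = Regex.Matches(original, @"[\S]+");

            if (wordColl.Count <= 70)
            {
                uxDescriptionDisplay.Text = string.Format("<p>{0}</p>", original);
            }
            else
            {
                // keep everything up to the end of the 70th word, then add an ellipsis
                Match lastWord = wordColl[69];
                string shortenedText = original.Substring(0, lastWord.Index + lastWord.Length) + "...";
                uxDescriptionDisplay.Text = string.Format("<p>{0}</p>", shortenedText);
            }
        }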

    Read the article

  • Select hidden input from within next td [jQuery]

    - by Fverswijver
    I have a table laid out like this:

        <td> somename </td>
        <td class="hoverable value"> somevalue </td>
        <td class="changed"> </td>
        <td class="original value">
            <input type="hidden" value="somevalue" />
        </td>

    What I'm trying to do is: I hover over the hoverable td, which turns it into a textbox. Once I hover out, I want to check the hidden field for its original value and put an image in the changed cell if the two are different from each other. I already have this:

        $(document).ready(function() {
            var newHTML = '';
            $('table td.hoverable').hover(
                function () {
                    var oldHTML = $(this).html().trim();
                    $(this).html('<input type=\'text\' value=\'' + oldHTML + '\' size=\'' + ((oldHTML).length + 2) + '\' />');
                },
                function() {
                    newHTML = $('input', this).val();
                    var oldHTML = $(this).next('td.original').children('hidden').val();
                    if (newHTML != oldHTML) {
                        $(this).next('td.changed').html('Changed');
                    }
                    $(this).html(newHTML);
                });
        });

    but it doesn't work. What fails, apparently, is grabbing the value of the hidden field; I've tried selecting it in several different ways but just can't get to it. Any ideas or tips are gratefully appreciated ;)
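
    Two things in the hover-out handler look like the likely culprits: .next() only inspects the immediately following sibling (which here is td.changed, not td.original), and .children('hidden') looks for a <hidden> element rather than a hidden input. A hedged fix for just those lines, keeping the rest of the handler as-is:

        // reach the later td.original sibling, then the hidden input inside it
        var oldHTML = $(this).nextAll('td.original').find('input[type="hidden"]').val();

        if (newHTML != oldHTML) {
            // td.changed happens to be the immediate next sibling, so .next() would work,
            // but nextAll() keeps it robust if another cell is ever inserted in between
            $(this).nextAll('td.changed').first().html('Changed');
        }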

    Read the article

  • Security benefits from a second opinion, are there flaws in my plan to hash & salt user passwords vi

    - by Tchalvak
    Here is my plan, and goals:

    Overall goals: Security with a certain amount of simplicity and database-to-database transferability, because I'm no expert and could mess it up, and I don't want to have to ask a lot of users to reset their passwords. Easy wiping of the passwords for publishing a "wiped" database of test data (e.g. I'd like to be able to use a PostgreSQL statement to simply reset all passwords to something simple so that testers can use that testing data for themselves).

    Plan, hashing the passwords: Account creation records the original email that an account is created with, forever. A global salt is used, e.g. "90fb16b6901dfceb73781ba4d8585f0503ac9391". An account-specific salt, the original email the account was created with, is used, e.g. "[email protected]". The user's password is used, e.g. "password123" (I'll be warning against weak passwords in the signup form). The combination of the global salt, account-specific salt, and password is hashed via some hashing method in PostgreSQL (I haven't been able to find documentation for hashing functions in PostgreSQL, but being able to use SHA-2 or something like that would be nice if I could find it). The hash gets stored in the database.

    Recovering an account: To change their password, users have to go through the standard password reset (and that reset email gets sent to the original email as well as the most recent account email that they have set).

    Flaws? Are there any flaws with this that I need to address? And are there best practices for doing the hashing fully within PostgreSQL?
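
    On the "hashing fully within PostgreSQL" part: the pgcrypto extension ships digest(), which covers the SHA-2 family, and encode() turns the bytea result into hex. A hedged sketch of the plan above; the table and column names are invented, CREATE EXTENSION needs 9.1+ (older servers load pgcrypto from its contrib SQL script), and for real password storage pgcrypto's crypt()/gen_salt('bf') is generally preferred over a single SHA-2 pass:

        -- pgcrypto ships with PostgreSQL as a contrib module
        CREATE EXTENSION IF NOT EXISTS pgcrypto;

        -- global salt || original email (account salt) || password, hashed with SHA-256
        UPDATE accounts
        SET    password_hash = encode(
                   digest('90fb16b6901dfceb73781ba4d8585f0503ac9391'
                          || original_email
                          || 'password123', 'sha256'),
                   'hex')
        WHERE  account_id = 42;

        -- the "wiped" test dump then becomes a single statement
        UPDATE accounts
        SET    password_hash = encode(
                   digest('90fb16b6901dfceb73781ba4d8585f0503ac9391' || original_email || 'test', 'sha256'),
                   'hex');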

    Read the article

  • What is the meaning of @ModelAttribute annotation at method argument level?

    - by beemaster
    The Spring 3 reference teaches us: "When you place it on a method parameter, @ModelAttribute maps a model attribute to the specific, annotated method parameter." I don't understand this magic spell, because I am sure that the model object's alias (the key value, if using ModelMap as the return type) is passed to the View only after the request handler method has executed. Therefore, while the request handler method executes, the model object's name can't yet be mapped to the method parameter. To resolve this contradiction I went to Stack Overflow and found this detailed example. The author of the example said: "// The "personAttribute" model has been passed to the controller from the JSP". It seems he is charmed by the Spring reference... To dispel the charms I deployed his sample app in my environment and cruelly cut the @ModelAttribute annotation from the method MainController.saveEdit. As a result, the application works without any changes! So I conclude: the @ModelAttribute annotation is not needed to pass the web form's field values to the argument's fields. Which leaves me stuck on the question: what is the meaning of the @ModelAttribute annotation? If its only purpose is to set an alias for the model object in the View, then why is this way better than explicitly adding the object to the ModelMap?
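
    For contrast, a small hedged sketch of the two styles being compared (the class and view names follow the linked example; they are not taken from this post). On a parameter, the annotation's visible effect is the name under which the form-backing object is exposed to the view; without it, Spring still binds the request parameters onto the argument, but exposes it under a default name derived from the type (here "person"):

        import org.springframework.stereotype.Controller;
        import org.springframework.ui.Model;
        import org.springframework.web.bind.annotation.ModelAttribute;
        import org.springframework.web.bind.annotation.RequestMapping;
        import org.springframework.web.bind.annotation.RequestMethod;

        @Controller
        public class PersonController {

            // Form fields are bound onto 'person' either way; the annotation's visible
            // effect is the alias ("personAttribute") the view sees the object under.
            @RequestMapping(value = "/saveEdit", method = RequestMethod.POST)
            public String saveEdit(@ModelAttribute("personAttribute") Person person) {
                return "editedPerson";
            }

            // Roughly equivalent without the parameter annotation: Spring still binds
            // the request parameters, but the alias for the view is added explicitly.
            @RequestMapping(value = "/saveEditExplicit", method = RequestMethod.POST)
            public String saveEditExplicit(Person person, Model model) {
                model.addAttribute("personAttribute", person);
                return "editedPerson";
            }
        }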

    Read the article

  • Nginx Rails app can't deploy

    - by user3596718
    I have an issue with my rails application running with passenger and nginx hosted in Ubuntu 12.04. In the nginx.conf file below, my "example.com" (Regular HTML) and "redmine.example.com" (Rails app) are working perfectly, but my "crete.example.com" (Another Rails app) is showing "502 bad gateway". I have them both hosted in /var/data with the same permissions and ownerships, also tried different ports, I can't think of something else to try. worker_processes 1; events { worker_connections 1024; } http { passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini; include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server{ listen 80; server_name example.com; root /opt/nginx/html; } server{ server_name redmine.example.com; root /var/data/redmine/public; passenger_enabled on; location ~ ^/<SUBURI>(/.*|$){ alias /var/data/redmine/public$1; passenger_base_uri /redmine; passenger_app_root /var/data/redmine; passenger_document_root /var/data/redmine/public; passenger_enabled on;} } server{ server_name crete.example.com; root /var/data/crete/public; passenger_enabled on; location ~ ^/<SUBURI>(/.*|$){ alias /var/data/crete/public$1; passenger_base_uri /crete; passenger_app_root /var/data/crete; passenger_document_root /var/data/crete/public; passenger_enabled on;} } } This are my Ruby and Rails versions: ruby 2.0.0p451 (2014-02-24 revision 45167) [x86_64-linux] Rails 4.1.0 My nginx error.log 2014/05/02 12:29:50 [error] 3343#0: *4 upstream prematurely closed connection while reading response header from upstream, client: xxx.xx.xx.xx, server: crete.example.com, request: "GET / HTTP/1.1", upstream: "passenger:/tmp/passenger.1.0.3 323/generation-0/request:", host: "crete.example.com" Any other conf file you might need to solve this don't hesitate to ask.

    Read the article

  • Entities used to serialize data have changed. How can the serialized data be upgraded for the new entities?

    - by i8abug
    Hi, I have a bunch of simple entity instances that I have serialized to a file. In the future, I know that the structure of these entities will change (i.e. maybe I will rename Name to Header or something). The thing is, I don't want to lose the data that I have saved in all these old files. What is the proper way to either load the data from the old entities into new entities, or upgrade the old files so that they can be used with the new entities? Note: I think I am stuck with binary serialization, not XML serialization. Thanks in advance!

    Edit: So I have an answer for the case I have described. I can use a DataContractSerializer and do something like

        [DataMember(Name = "bar")]
        private string foo;

    to change the name in the code while keeping the same name that was used for serialization. But what about the following additional cases:
    - The original entity has new members which can be serialized.
    - Some serialized members that were in the original entity are removed.
    - Some members have actually changed in function (suppose the original class had FirstName and LastName members and it has been refactored to have only a FullName member which combines the two).

    To handle these, I need some sort of interpreter/translator deserialization class, but I have no idea what I should use.
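
    A hedged sketch of how the three extra cases are often handled with DataContractSerializer conventions (the member names here are examples, not from the original code): new members can be marked optional, removed members simply stop being emitted (or are kept as private fields purely for reading old files), and reshaped members can be rebuilt in an [OnDeserialized] callback.

        using System.Runtime.Serialization;

        [DataContract]
        public class Person
        {
            // renamed in code, same name on the wire
            [DataMember(Name = "Name")]
            public string Header { get; set; }

            // case 1: new member - old files simply leave it null
            [DataMember(IsRequired = false)]
            public string Nickname { get; set; }

            // case 3: FirstName/LastName were replaced by FullName; keep private members
            // mapped to the old wire names and combine them after deserialization
            [DataMember(Name = "FirstName", EmitDefaultValue = false)]
            private string firstName;

            [DataMember(Name = "LastName", EmitDefaultValue = false)]
            private string lastName;

            [DataMember(IsRequired = false)]
            public string FullName { get; set; }

            [OnDeserialized]
            private void FixUp(StreamingContext context)
            {
                if (string.IsNullOrEmpty(FullName) && firstName != null)
                    FullName = firstName + " " + lastName;
            }
        }

    Case 2 (a member removed from the class) needs no code at all if losing that value is acceptable; the serializer ignores data it has no member for.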

    Read the article

  • UPDATE Table SET Field

    - by davlyo
    This is my very first post! Bear with me. I have an UPDATE statement that I am trying to understand how SQL Server handles.

        UPDATE a
        SET a.vField3 = b.vField3
        FROM tableName a
        INNER JOIN tableName b
            ON a.vField1 = b.vField1
            AND b.nField2 = a.nField2 - 1

    This is my query in its simplest form. vField1 is a varchar, nField2 is an int (autonumber), and vField3 is a varchar. I have left the WHERE clause out, so understand there is logic that otherwise makes this a necessity. Say vField1 is a customer number and that customer has 3 records; the values in nField2 are 1, 2, and 3 consecutively. vField3 is a status. When the update comes to a.nField2 = 1 there is no a.nField2 - 1, so it continues. When the update comes to a.nField2 = 2, b.nField2 = 1. When the update comes to a.nField2 = 3, b.nField2 = 2. So when the update is on a.nField2 = 2, alias b reflects what is on the line prior (b.nField2 = 1), and it sets the varchar value of a.vField3 = b.vField3. When the update is on a.nField2 = 3, alias b reflects what is on the line prior (b.nField2 = 2), and it (should) set the varchar value of a.vField3 = b.vField3. When the process is complete, the second of the three records looks as expected; the value in vField3 of the second record reflects the value in vField3 from the first record. However, vField3 of the third record does not reflect the value in vField3 from the second record. I think this demonstrates that SQL Server may be producing a snapshot of some sort and then an update. Question: How can I get the DB to update after each step so I can reference the values generated by each earlier update?
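
    SQL Server evaluates the whole UPDATE against the data as it looked before the statement started, so the row with nField2 = 3 joins to row 2's original value, never the freshly written one. If each row really must see the previous row's new value, the update has to run in passes, for example with a loop. A rough sketch using the column names above:

        DECLARE @i int, @max int;
        SELECT @i = MIN(nField2), @max = MAX(nField2) FROM tableName;

        WHILE @i <= @max
        BEGIN
            UPDATE a
            SET    a.vField3 = b.vField3
            FROM   tableName a
            INNER JOIN tableName b
                   ON  a.vField1 = b.vField1
                   AND b.nField2 = a.nField2 - 1
            WHERE  a.nField2 = @i;   -- one "generation" per pass, so later rows see earlier updates

            SET @i = @i + 1;
        END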

    Read the article

  • Simple javascript/jquery append, remove question

    - by Scarface
    Hey guys, quick question. I have a div that on click will execute a jQuery function and then replace itself with its opposite div, which on click will swap back to the first div, just like the favorite function on this site. I am having a simple problem I hope someone can spot, or explain to me why it is not possible. The function works with the initial two divs that are served directly, but when I click on either of the original two divs, and the function executes and replaces the div with the corresponding div, the replacement div does not execute the function. I have to refresh the page to get the original div, which is identical, to execute the function. Is the div actually replaced on the page with append, or does it just visually show it? Any advice appreciated.

    Original div:

        <div class="unfavorite"><img id="unfavorite_img" src="images/favorite2.png" /></div>

    JavaScript div replacement:

        $(".favorite").remove();
        $(".favoriteholder").append('<div class="unfavorite"><img id="unfavorite_img" src="images/favorite2.png" /></div>');
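
    append() does put a real element in the page; what gets lost is the click handler, because handlers bound with .click()/.bind() only attach to elements that existed at bind time. Delegating the handler to a stable ancestor avoids that. A hedged sketch (the .favoriteholder container and class names come from the snippet above, the handler bodies and the "favorite" markup are placeholders; .on() needs jQuery 1.7+, older versions use .delegate() or .live() the same way):

        $(document).ready(function () {
            // delegated handlers keep working after the remove()/append() swaps below
            $('.favoriteholder').on('click', '.favorite', function () {
                // ... call the "unfavorite" action here ...
                $(this).replaceWith('<div class="unfavorite"><img id="unfavorite_img" src="images/favorite2.png" /></div>');
            });

            $('.favoriteholder').on('click', '.unfavorite', function () {
                // ... call the "favorite" action here ...
                $(this).replaceWith('<div class="favorite"><img id="favorite_img" src="images/favorite.png" /></div>');
            });
        });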

    Read the article

  • NHibernate Native SQL multiple joins

    - by Chris
    Hi all, I'm having some problems with NHibernate and native SQL. I've got an entity with a lot of collections and I am doing a SQL full-text search on it. So when returning 100 or so entities, I don't want all the collections to be lazy loaded. For this I changed my SQL query:

        SELECT Query.* FROM
        (SELECT {spr.*}, {adr.*}, {adrt.*}, {cty.*}, {com.*}, {comt.*},
                spft.[Rank] AS [Rank],
                Row_number() OVER(ORDER BY spft.[Rank] DESC) AS rownum
         FROM customer spr
         INNER JOIN CONTAINSTABLE(customerfulltext, computedfulltextindex, '" + parsedSearchTerm + @"') AS spft
            ON spr.customerid = spft.[Key]
         LEFT JOIN [Address] adr ON adr.customerid = spr.customerid
         INNER JOIN [AddressType] adrt ON adrt.addresstypeid = adr.addresstypeid
         INNER JOIN [City] cty ON cty.cityid = adr.cityid
         LEFT JOIN [Communication] com ON com.customerid = spr.customerid
         INNER JOIN [CommunicationType] comt ON comt.communicationtypeid = com.communicationtypeid) as Query
        ORDER BY Query.[Rank] DESC

    This is how I set up the query:

        var items = GetCurrentSession()
            .CreateSQLQuery(query)
            .AddEntity("spr", typeof(Customer))
            .AddJoin("adr", "spr.addresses")
            .AddJoin("adrt", "adr.Type")
            .AddJoin("cty", "adr.City")
            .AddJoin("com", "spr.communicationItems")
            .AddJoin("comt", "com.Type")
            .List<Customer>();

    What happens now is that the query returns customers twice (or more). I assume this is because of the joins, since for each customer address or communication item (e.g. phone, email) a new SQL row is returned. In this case I thought I could use the DistinctRootEntityResultTransformer:

        var items = GetCurrentSession()
            .CreateSQLQuery(query)
            .AddEntity("spr", typeof(Customer))
            .AddJoin("adr", "spr.addresses")
            .AddJoin("adrt", "adr.Type")
            .AddJoin("cty", "adr.City")
            .AddJoin("com", "spr.communicationItems")
            .AddJoin("comt", "com.Type")
            .SetResultTransformer(new DistinctRootEntityResultTransformer())
            .List<Customer>();

    Doing so, an exception is thrown. This is because I try to list customers with .List<Customer>(), but the transformer returns only entities of the last join added. E.g. in the case above, the entity with alias "comt" is returned when calling .List() rather than a list of customers. If I switched the last join with the join alias "cty", the transformer would return a list of cities only... Does anyone know how I can return a clean list of customers in this case?

    Read the article

  • Excel error "This workbook contains Excel 4.0 macros or Excel 5.0 modules"

    - by James
    I have a workbook that was protected via the Protect Workbook feature. It was sent to someone else to modify. When they sent it back, it was unprotected and when I try to reprotect it I get this error, "This workbook contains Excel 4.0 macros or Excel 5.0 modules. If you would like to password protect or restrict permission to this document, you need to remove these macros." I looked and there are no new macros in the edited file. The original file contained the same macros and it was able to be write protected, so I'm not sure why the modified file is having a problem. What are common causes and solutions for this error and does it make sense for the modified file to have the error when the original doesn't?

    Read the article

  • nagios NRPE: Unable to read output

    - by user555854
    I currently set up a script to restart my http servers + php5 fpm but can't get it to work. I have googled and have found that mostly permissions are the problems of my error but can't figure it out. I start my script using /usr/lib/nagios/plugins/check_nrpe -H bart -c restart_http This is the output in my syslog on the node I want to restart Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 port 25028 Jun 27 06:29:35 bart nrpe[8926]: Host address is in allowed_hosts Jun 27 06:29:35 bart nrpe[8926]: Handling the connection... Jun 27 06:29:35 bart nrpe[8926]: Host is asking for command 'restart_http' to be run... Jun 27 06:29:35 bart nrpe[8926]: Running command: /usr/bin/sudo /usr/lib/nagios/plugins/http-restart Jun 27 06:29:35 bart nrpe[8926]: Command completed with return code 1 and output: Jun 27 06:29:35 bart nrpe[8926]: Return Code: 1, Output: NRPE: Unable to read output Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 closed. If I run the command myself it runs fine (but asks for a password) (nagios user) This are the script permission and the script contents. -rwxrwxrwx 1 nagios nagios 142 Jun 26 21:41 /usr/lib/nagios/plugins/http-restart #!/bin/bash echo "ok" /etc/init.d/nginx stop /etc/init.d/nginx start /etc/init.d/php5-fpm stop /etc/init.d/php5-fpm start echo "done" I also added this line to visudo nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ My local nagios nrpe.cfg ############################################################################# # Sample NRPE Config File # Written by: Ethan Galstad ([email protected]) # # # NOTES: # This is a sample configuration file for the NRPE daemon. It needs to be # located on the remote host that is running the NRPE daemon, not the host # from which the check_nrpe client is being executed. ############################################################################# # LOG FACILITY # The syslog facility that should be used for logging purposes. log_facility=daemon # PID FILE # The name of the file in which the NRPE daemon should write it's process ID # number. The file is only written if the NRPE daemon is started by the root # user and is running in standalone mode. pid_file=/var/run/nagios/nrpe.pid # PORT NUMBER # Port number we should wait for connections on. # NOTE: This must be a non-priviledged port (i.e. > 1024). # NOTE: This option is ignored if NRPE is running under either inetd or xinetd server_port=5666 # SERVER ADDRESS # Address that nrpe should bind to in case there are more than one interface # and you do not want nrpe to bind on all interfaces. # NOTE: This option is ignored if NRPE is running under either inetd or xinetd #server_address=127.0.0.1 # NRPE USER # This determines the effective user that the NRPE daemon should run as. # You can either supply a username or a UID. # # NOTE: This option is ignored if NRPE is running under either inetd or xinetd nrpe_user=nagios # NRPE GROUP # This determines the effective group that the NRPE daemon should run as. # You can either supply a group name or a GID. # # NOTE: This option is ignored if NRPE is running under either inetd or xinetd nrpe_group=nagios # ALLOWED HOST ADDRESSES # This is an optional comma-delimited list of IP address or hostnames # that are allowed to talk to the NRPE daemon. # # Note: The daemon only does rudimentary checking of the client's IP # address. I would highly recommend adding entries in your /etc/hosts.allow # file to allow only the specified host to connect to the port # you are running this daemon on. 
# # NOTE: This option is ignored if NRPE is running under either inetd or xinetd allowed_hosts=127.0.0.1,192.168.133.17 # COMMAND ARGUMENT PROCESSING # This option determines whether or not the NRPE daemon will allow clients # to specify arguments to commands that are executed. This option only works # if the daemon was configured with the --enable-command-args configure script # option. # # *** ENABLING THIS OPTION IS A SECURITY RISK! *** # Read the SECURITY file for information on some of the security implications # of enabling this variable. # # Values: 0=do not allow arguments, 1=allow command arguments dont_blame_nrpe=0 # COMMAND PREFIX # This option allows you to prefix all commands with a user-defined string. # A space is automatically added between the specified prefix string and the # command line from the command definition. # # *** THIS EXAMPLE MAY POSE A POTENTIAL SECURITY RISK, SO USE WITH CAUTION! *** # Usage scenario: # Execute restricted commmands using sudo. For this to work, you need to add # the nagios user to your /etc/sudoers. An example entry for alllowing # execution of the plugins from might be: # # nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ # # This lets the nagios user run all commands in that directory (and only them) # without asking for a password. If you do this, make sure you don't give # random users write access to that directory or its contents! command_prefix=/usr/bin/sudo # DEBUGGING OPTION # This option determines whether or not debugging messages are logged to the # syslog facility. # Values: 0=debugging off, 1=debugging on debug=1 # COMMAND TIMEOUT # This specifies the maximum number of seconds that the NRPE daemon will # allow plugins to finish executing before killing them off. command_timeout=60 # CONNECTION TIMEOUT # This specifies the maximum number of seconds that the NRPE daemon will # wait for a connection to be established before exiting. This is sometimes # seen where a network problem stops the SSL being established even though # all network sessions are connected. This causes the nrpe daemons to # accumulate, eating system resources. Do not set this too low. connection_timeout=300 # WEEK RANDOM SEED OPTION # This directive allows you to use SSL even if your system does not have # a /dev/random or /dev/urandom (on purpose or because the necessary patches # were not applied). The random number generator will be seeded from a file # which is either a file pointed to by the environment valiable $RANDFILE # or $HOME/.rnd. If neither exists, the pseudo random number generator will # be initialized and a warning will be issued. # Values: 0=only seed from /dev/[u]random, 1=also seed from weak randomness #allow_weak_random_seed=1 # INCLUDE CONFIG FILE # This directive allows you to include definitions from an external config file. #include=<somefile.cfg> # INCLUDE CONFIG DIRECTORY # This directive allows you to include definitions from config files (with a # .cfg extension) in one or more directories (with recursion). #include_dir=<somedirectory> #include_dir=<someotherdirectory> # COMMAND DEFINITIONS # Command definitions that this daemon will run. Definitions # are in the following format: # # command[<command_name>]=<command_line> # # When the daemon receives a request to return the results of <command_name> # it will execute the command specified by the <command_line> argument. # # Unlike Nagios, the command line cannot contain macros - it must be # typed exactly as it should be executed. 
# # Note: Any plugins that are used in the command lines must reside # on the machine that this daemon is running on! The examples below # assume that you have plugins installed in a /usr/local/nagios/libexec # directory. Also note that you will have to modify the definitions below # to match the argument format the plugins expect. Remember, these are # examples only! # The following examples use hardcoded command arguments... command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10 command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20 command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1 command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200 # The following examples allow user-supplied arguments and can # only be used if the NRPE daemon was compiled with support for # command arguments *AND* the dont_blame_nrpe directive in this # config file is set to '1'. This poses a potential security risk, so # make sure you read the SECURITY file before doing this. #command[check_users]=/usr/lib/nagios/plugins/check_users -w $ARG1$ -c $ARG2$ #command[check_load]=/usr/lib/nagios/plugins/check_load -w $ARG1$ -c $ARG2$ #command[check_disk]=/usr/lib/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$ #command[check_procs]=/usr/lib/nagios/plugins/check_procs -w $ARG1$ -c $ARG2$ -s $ARG3$ command[restart_http]=/usr/lib/nagios/plugins/http-restart # # local configuration: # if you'd prefer, you can instead place directives here include=/etc/nagios/nrpe_local.cfg # # you can place your config snipplets into nrpe.d/ include_dir=/etc/nagios/nrpe.d/ My Sudoers files # /etc/sudoers # # This file MUST be edited with the 'visudo' command as root. # # See the man page for details on how to write a sudoers file. # Defaults env_reset # Host alias specification # User alias specification # Cmnd alias specification # User privilege specification root ALL=(ALL) ALL nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ # Allow members of group sudo to execute any command # (Note that later entries override this, so you might need to move # it further down) %sudo ALL=(ALL) ALL # #includedir /etc/sudoers.d Hopefully someone can help!
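
    One frequent cause of "NRPE: Unable to read output" with sudo-wrapped commands is sudo's requiretty default, which makes sudo fail when NRPE runs the command without a terminal; another is the sudoers path not matching the exact command being executed. A hedged sudoers sketch (edit with visudo; it assumes requiretty is in effect on this box):

        # allow the nagios user to sudo without a tty (only needed if "Defaults requiretty" applies)
        Defaults:nagios !requiretty

        # be explicit about the one script NRPE needs, instead of the whole plugin directory
        nagios ALL=(root) NOPASSWD: /usr/lib/nagios/plugins/http-restart

    Tightening the script's 777 permissions (e.g. to 750, owned by root or nagios) is also worth doing once it works, since anything the nagios user can rewrite can now be run as root.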

    Read the article

  • nginx macro like apache's mod_macro

    - by karmic
    Is there a way to use 'macros' in nginx? For example, in Apache I can do something like

        <Macro myMacro>
            <VirtualHost>
                ServerName $blah
                Alias www.$blah
                Log /var/log/apache2/$blah
            </VirtualHost>
        </Macro>

    and then use it for many hosts like:

        Use myMacro hello.com
        Use myMacro hi.com

    Is there a way to do something similar in nginx?
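
    nginx has no direct equivalent of mod_macro, but the usual workaround is to factor the repeated directives into a file and include it from each server block; only the parts that change (server_name, root, per-host log) stay in the block. A hedged sketch with made-up paths:

        # /etc/nginx/common-vhost.conf  -- the shared "macro body"
        listen 80;
        index index.html index.htm;

        location /static/ {
            expires 7d;
        }

        # /etc/nginx/nginx.conf (inside http { ... })
        server {
            server_name hello.com www.hello.com;
            root /var/www/hello.com;
            access_log /var/log/nginx/hello.com.access.log;
            include /etc/nginx/common-vhost.conf;
        }

        server {
            server_name hi.com www.hi.com;
            root /var/www/hi.com;
            access_log /var/log/nginx/hi.com.access.log;
            include /etc/nginx/common-vhost.conf;
        }

    For truly parameterised templates, people usually generate the server blocks with an external tool (a shell loop, erb, m4, etc.) rather than inside nginx itself.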

    Read the article

  • 503 Service Unavailable - What does it really mean?

    - by pandiya chendur
    Possible dup: http://stackoverflow.com/questions/2529244/503-service-unavailable-what-really-it-means I am asking on behalf of the original question poster because we both work in the same place... I developed a website and it loads on every other system but certainly not on mine. When I used Firebug, my request showed 503 Service Unavailable. The Firebug response headers showed:

        Server squid/2.6.STABLE21
        Date Sat, 27 Mar 2010 12:25:18 GMT
        Content-Type text/html
        Content-Length 1163
        Expires Sat, 27 Mar 2010 12:25:18 GMT
        X-Squid-Error ERR_DNS_FAIL 0
        X-Cache MISS from xavy
        X-Cache-Lookup MISS from xavy:3128
        Via 1.0 xavy:3128 (squid/2.6.STABLE21)
        Proxy-Connection close

    For reference, please visit the original question, look at the answers and comments, and help us out.

    Read the article

  • Installing OpenSSL that supports SNI along with previous version of OpenSSL

    - by gh0sT
    So I learned that to host multiple HTTPS websites on the same IP address you need an OpenSSL version that supports SNI (0.9.8f and higher). My RHEL5 box currently has 0.9.8e and Apache version httpd-2.2.26-2.el5. According to a similar question here, it's not a good idea to replace the original version of OpenSSL; a parallel installation is preferred instead. It doesn't, however, explicitly mention how to achieve this. So my questions are: How do I set up an alternate installation of OpenSSL without breaking the system? How do I make Apache use this version of OpenSSL and not the original one? A detailed guide would be extremely helpful.
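
    A common pattern is to build the newer OpenSSL into its own prefix so the system copy in /usr stays untouched, then compile Apache's mod_ssl against that prefix. A hedged sketch of the parallel install; version numbers and paths are examples only:

        # build OpenSSL 1.0.x into /opt/openssl, leaving the RHEL5 0.9.8e packages alone
        cd /usr/local/src
        tar xzf openssl-1.0.1g.tar.gz && cd openssl-1.0.1g
        ./config --prefix=/opt/openssl --openssldir=/opt/openssl shared
        make && make install

        # build httpd against the new library rather than the system OpenSSL
        cd ../httpd-2.2.26
        ./configure --prefix=/opt/httpd --enable-ssl --with-ssl=/opt/openssl
        make && make install

    Depending on how it links, you may also need to point the runtime loader at /opt/openssl/lib (an ld.so.conf.d entry or LD_LIBRARY_PATH in the httpd init script) so the new httpd picks up the right libssl at startup.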

    Read the article

  • Disk performance below expectations

    - by paulH
    This is a follow-up to a previous question that I asked (Two servers with inconsistent disk speed). I have a PowerEdge R510 server with a PERC H700 integrated RAID controller (call this Server B) that was built using eight disks with 3Gb/s bandwidth, which I was comparing with an almost identical server (call this Server A) that was built using four disks with 6Gb/s bandwidth. Server A had much better I/O rates than Server B. Once I discovered the difference with the disks, I had Server A rebuilt with faster 6Gbps disks. Unfortunately this resulted in no increase in the performance of the disks. Expecting that there must be some other configuration difference between the servers, we took the 6Gbps disks out of Server A and put them in Server B. This also resulted in no increase in the performance of the disks. We now have two identical servers built, with the exception that one is built with six 6Gbps disks and the other with eight 3Gbps disks, and the I/O rates of the disks are pretty much identical. This suggests that there is some bottleneck other than the disks, but I cannot understand how Server B originally had better I/O that has subsequently been 'lost'. Comparative I/O information is below, as measured by SQLIO. The same parameters were used for each test. It's not the actual numbers that are significant but rather the variations between systems. In each case D: is a 2-disk RAID 1 volume, and E: is a 4-disk RAID 10 volume (apart from the original Server A, where E: was a 2-disk RAID 0 volume).

        Server A (original setup, 6Gbps disks):  D: Read 63 MB/s,  D: Write 170 MB/s,  E: Read 68 MB/s,   E: Write 320 MB/s
        Server B (original setup, 3Gbps disks):  D: Read 52 MB/s,  D: Write 88 MB/s,   E: Read 112 MB/s,  E: Write 130 MB/s
        Server A (new setup, 3Gbps disks):       D: Read 55 MB/s,  D: Write 85 MB/s,   E: Read 67 MB/s,   E: Write 180 MB/s
        Server B (new setup, 6Gbps disks):       D: Read 61 MB/s,  D: Write 95 MB/s,   E: Read 69 MB/s,   E: Write 180 MB/s

    Can anybody suggest any ideas what is going on here? The drives in use are as follows:

        Dell Seagate F617N ST3300657SS 300GB 15K RPM SAS
        Dell Hitachi HUS156030VLS600 300GB 3.5 inch 15000rpm 6GB SAS
        Hitachi Hus153030vls300 300GB Server SAS
        Dell ST3146855SS Seagate 3.5 inch 146GB 15K SAS

    Read the article

  • Multiple different websites on the same IIS server

    - by Krystian
    I have a Windows 2003 Server providing DNS, and the server is hosting siteA.com. I now want to add another website, siteB.com. So far, I've created siteB.com on the server, but I'm unsure about how to correctly configure DNS for it. The public DNS administrator has mapped siteB.com as an alias of siteA.com. However, I don't know how to configure my local DNS server accordingly. I tried adding a local record for siteB, however it shows up as siteB.siteA.com. Please help.

    Read the article

  • Keyboard Shortcuts in Win 7 without the CTRL + ALT

    - by Carlos
    I am new to this site and don't know if I'm doing this correctly. I've been asked to edit my original post, so I deleted my original post and am starting over. I don't know why it's so hard for everyone to understand what I'm trying to do. You guys are all geniuses when it comes to computers and I'm just starting out. I started out trying to use a shortcut to display the LOCAL AREA CONNECTION window on my desktop by creating a shortcut and assigning it CTRL + , (comma). Windows didn't like that, so it added ALT, which ended up being CTRL + ALT + ,. Since I couldn't figure out a way to eliminate ALT as part of the shortcut keys, I am now trying a different strategy and it's not working. My latest attempt is to run the following command:

        ^,:: Run, explorer:: {BA126ADB-2166-11D1-B1D0-00805FC1270E}

    Can someone please tell me what I'm doing wrong? I'm trying, just give me a chance. Thanks, Carlos
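
    If that snippet is an AutoHotkey hotkey (which it appears to be), the pieces are close; the shell-folder GUID just needs to be passed to explorer as an argument rather than glued onto it. A hedged one-liner, reusing the GUID from the question as-is:

        ; Ctrl+, opens the shell folder identified by the GUID (no Alt required)
        ^,::Run, explorer.exe ::{BA126ADB-2166-11D1-B1D0-00805FC1270E}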

    Read the article

  • Windows 7 won't recognize backup set: can I script extracting the files in some other way?

    - by datatoo
    The Windows 7 Backup/Restore created multiple backup sets and I was able to restore the oldest version, but not the most recent, which is not seen by the application. I do see all of the zip files and there are hundreds in later versions. Is there a way to extract each of these correctly outside of the regular restoration method? Perhaps scripting an extract of each day one after another? further clarifying The backup files were all made to an external drive. The original computer died completely, power supply, drives everything. I am trying to reconstruct as much as possible and the only backup set recognized is 6 months older. This was recovered over a new install, but unzipping thousands of zip files is not really a simple unzip copy project as the original paths are not a simple thing to reconstruct.
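
    For the scripting angle, here is a hedged PowerShell sketch that walks every zip in the backup set and unpacks it via the Windows shell (the paths are placeholders; PowerShell 2.0 on Windows 7 has no built-in unzip cmdlet, so the Shell.Application COM object does the work). Note that it dumps everything into one folder and does not rebuild the original directory structure, which as you say is the hard part:

        $source = 'E:\MYPC\Backup Set 2011-01-01 010101'   # placeholder path to the backup set
        $dest   = 'C:\Restore'
        New-Item -ItemType Directory -Force -Path $dest | Out-Null

        $shell = New-Object -ComObject Shell.Application
        Get-ChildItem -Path $source -Filter '*.zip' -Recurse | ForEach-Object {
            # 16 = "Yes to All", so duplicate-file prompts don't stop the loop
            $shell.NameSpace($dest).CopyHere($shell.NameSpace($_.FullName).Items(), 16)
        }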

    Read the article

  • Can a USB cable carry 12V?

    - by zm15
    Here's what I'm wanting to do. I have an Acer Iconia A500 tablet. I want to plug it in, in the car, but it has a barrel plug and I don't want to buy an inverter. The car adapters are expensive for what they do. I already have a 2.1 amp USB car charger meant for the iPad: http://www.amazon.com/Kensington-K33497US-PowerBolt-Charger-Compatible/dp/tech-data/B003PU01M4/ref=de_a_smtd And I want to use this USB cable from the 2.1 amp port to plug into the A500: http://www.amazon.com/gp/product/B00304DZ7I/ref=ox_sc_act_title_2?ie=UTF8&m=A1HPBDJJIXKXS7 Here are the specs on the original wall charger if that helps: http://www.phihong.com/assets/pdf/PSA18R.pdf The USB cable says it's 5V, but the original charger says it outputs 12V. But since it's just a cable... I wasn't sure if that really made a whole lot of difference, since it's only 1.5 amps from the wall charger. Is it possible to use that USB cable through the PowerBolt car charger to charge the A500?

    Read the article

  • Automating the choice between JPEG and PNG with a script

    - by MHC
    Choosing the right format to save your images in is crucial for preserving image quality and reducing artifacts. Different formats follow different compression methods and come with their own sets of advantages and disadvantages. JPG, for instance, is suited for real-life photographs that are rich in color gradients. The lossless PNG, on the other hand, is far superior when it comes to schematic figures. Picking the right format can be a chore when working with a large number of files. That's why I would love to find a way to automate it.

    A little bit of background on my particular use case: I am working on a number of handouts for a series of lectures at my university. The handouts are rich in figures, which I have to extract from PDF-formatted slides. Extracting these images gives me lossless PNGs, which are needlessly large at times. Converting these particular files to JPEG can reduce their size to less than 20% of their original file size while maintaining the same quality. This is important, as working with hundreds of large images in word processors is pretty crash-prone. Batch converting all extracted PNGs to JPEGs is not an option I am willing to follow, as many if not most images are better suited to be formatted as PNGs. Converting those would result in insignificant size reductions and sometimes even increases in file size; that's at least what my test runs showed.

    What we can take from this is that file size after compression can serve as an indicator of what format is suited best for a particular image. It's not a particularly accurate predictor, but it works well enough. So why not use it in the form of a script? I included inotifywait because I would prefer the script to be executed automatically as soon as I drag an extracted image into a folder. This is a simpler version of the script that I've been using for the last couple of weeks:

        #!/bin/bash
        inotifywait -m --format "%w%f" --exclude '.jpg' -r -e create -e moved_to --fromfile '/home/MHC/.scripts/Workflow/Conversion/include_inotifywait' | while read file; do
            mogrify -format jpg -quality 92 "$file"
        done

    The advanced version of the script would have to be able to:
    - handle spaces in file names and directory names
    - preserve the original file names
    - flatten PNG images if an alpha value is set
    - compare the file size between the temporary converted image and its original
    - determine if the difference is greater than a given percentage
    - act accordingly

    The actual conversion could be done with ImageMagick tools:

        convert -quality 92 -flatten -background white file.png file.jpg

    Unfortunately, my bash skills aren't even close to advanced enough to turn the scheme above into an actual script, but I am sure many of you can. My reputation points on here are pretty low, but I will gladly award the most helpful answer with the highest bounty I can set. References: http://www.formortals.com/introducing-cnb-imageguide/, http://www.turnkeylinux.org/blog/png-vs-jpg Edit: Also see my comments below for some more information on why I think this script would be the best solution to the problem I am facing.
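
    A rough sketch of the compare-and-decide step described above (the threshold, the filename argument and the size comparison are placeholders/assumptions; it leans only on ImageMagick's convert and coreutils' stat, and quoting keeps spaces in paths safe):

        #!/bin/bash
        # convert_if_smaller.sh PNGFILE [THRESHOLD_PERCENT]
        # keep the JPEG only if it is smaller than THRESHOLD_PERCENT of the original PNG size
        png="$1"
        threshold="${2:-50}"

        tmp="$(mktemp --suffix=.jpg)"
        convert -quality 92 -flatten -background white "$png" "$tmp"

        png_size=$(stat -c%s "$png")
        jpg_size=$(stat -c%s "$tmp")

        if [ $(( jpg_size * 100 / png_size )) -lt "$threshold" ]; then
            mv "$tmp" "${png%.png}.jpg" && rm -- "$png"   # JPEG wins: replace the PNG
        else
            rm -- "$tmp"                                  # PNG wins: throw the JPEG away
        fi

    Hooking it into the inotifywait loop from the question would then be a matter of calling this script on each "$file" instead of running mogrify directly.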

    Read the article

  • Indirect Postfix bounces create new user directories

    - by hheimbuerger
    I'm running Postfix on my personal server in a data centre. I am not a professional mail hoster and not a Postfix expert, it is just used for a few domains served from that server. IIRC, I mostly followed this howto when setting up Postfix. Mails addressed to one of the domains the server manages are delivered locally (/srv/mail) to be fetched with Dovecot. Mails to other domains require usage of SMTPS. The mailbox configuration is stored in MySQL. The problem I have is that I suddenly found new mailboxes being created on the disk. Let's say I have the domain 'example.com'. Then I would have lots of new directories, e.g. /srv/mail/example.com/abenaackart /srv/mail/example.com/abenaacton etc. There are no entries for these addresses in my database, neither as a mailbox nor as an alias. It's clearly spam from auto-generated names. Most of them start with 'a', a few with 'b' and a couple of random ones with other letters. At first I was afraid of an attack, but all security restrictions seem to work. If I try to send mail to these addresses, I get an "Recipient address rejected: User unknown in virtual mailbox table" during the 'RCPT TO' stage. So I looked into the mails stored in these mailboxes. Turns out that all of them are bounces. It seems like all of them were sent from a randomly generated name to an alias that really exists on my system, but pointed to an invalid destination address on another host. So Postfix accepted it, then tried to redirect it to another mail server, which rejected it. This bounced back to my Postfix server, which now took the bounce and stored it locally -- because it seemed to be originating from one of the addresses it manages. Example: My Postfix server handles the example.com domain. [email protected] is configured to redirect to [email protected]. [email protected] has since been deleted from the Hotmail servers. Spammer sends mail with FROM:[email protected] and TO:[email protected]. My Postfix server accepts the mail and tries to hand it off to hotmail.com. hotmail.com sends a bounce back. My Postfix server accepts the bounce and delivers it to /srv/mail/example.com/bob. The last step is what I don't want. I'm not quite sure what it should do instead, but creating hundreds of new mailboxes on my disk is not what I want... Any ideas how to get rid of this behaviour? I'll happily post parts of my configuration, but I'm not really sure where to start debugging the problem at this point.
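
    The simplest fix is of course to remove or repair the stale alias, but to stop the accept-then-bounce pattern in general, Postfix's recipient address verification can probe the downstream mailbox before the message is accepted, so mail to a dead forward is rejected at SMTP time instead of bouncing later. A hedged main.cf sketch (the restriction order is illustrative, and some large providers dislike or always accept verification probes, so treat this as a starting point):

        # /etc/postfix/main.cf
        smtpd_recipient_restrictions =
            permit_mynetworks,
            reject_unauth_destination,
            reject_unverified_recipient

        # cache the probe results so the remote host is not hammered on every spam attempt
        address_verify_map = btree:$data_directory/verify_cache
        unverified_recipient_reject_code = 550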

    Read the article

  • Disable "longhaul" kernel module with a GRUB command?

    - by Julian Schweigert
    I've got a problem with a VIA C3 (1GHz) system: the system freezes immediately when the CPU frequency drops below 731MHz, because of an incompatibility between the (not completely implemented) i686 instructions and a power-saving feature of the kernel. There is a workaround: deactivate the "longhaul" kernel module via alias longhaul off in /etc/modprobe.d/aliases. But the system freezes before I can install any Linux distribution; even Clonezilla freezes. Is there a possibility to deactivate the module with a GRUB boot parameter, before the kernel is loaded?
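
    Depending on the distribution, the module can usually be kept out from the boot prompt itself: recent kernels/udev honour a modprobe.blacklist= token on the kernel command line, while some installers want their own blacklist option instead, so check the distro's boot-parameter docs. A hedged example for the installed system (Debian/Ubuntu-style GRUB 2; paths and the exact parameter are assumptions to verify against your distribution):

        # at the installer or live-CD boot prompt, append to the kernel line:
        #   ... quiet modprobe.blacklist=longhaul

        # /etc/default/grub, once the system is installed
        GRUB_CMDLINE_LINUX_DEFAULT="quiet modprobe.blacklist=longhaul"
        # then regenerate the config:
        #   update-grub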

    Read the article
