Search Results

Search found 2108 results on 85 pages for 'differences'.

Page 74/85

  • optimization math computation (multiplication and summing)

    - by wiso
    Suppose you want to compute the sum of the squares of the differences of consecutive items: $\sum_{i=1}^{N-1} (x_i - x_{i+1})^2$. The simplest code (the input is std::vector<double> xs, the output is sum2) is:

        double sum2 = 0.;
        double prev = xs[0];
        for (vector<double>::const_iterator i = xs.begin() + 1; i != xs.end(); ++i) {
            sum2 += (prev - (*i)) * (prev - (*i)); // only 1 subtraction, with compiler optimization
            prev = (*i);
        }

    I hope the compiler does the optimization in the comment above. If N is the length of xs, you have N-1 multiplications and 2N-3 additions (addition meaning + or -). Now suppose you already know this quantity: sum $= x_1^2 + x_N^2 + 2\sum_{i=2}^{N-1} x_i^2$. Expanding the binomial square, $\sum_{i=1}^{N-1} (x_i - x_{i+1})^2 = \text{sum} - 2\sum_{i=1}^{N-1} x_i x_{i+1}$, so the code becomes:

        double sum2 = 0.;
        double prev = xs[0];
        for (vector<double>::const_iterator i = xs.begin() + 1; i != xs.end(); ++i) {
            sum2 += (*i) * prev;
            prev = (*i);
        }
        sum2 = -sum2 * 2. + sum;

    Here I have N multiplications and N-1 additions. In my case N is about 100. Well, compiling with g++ -O2 I got no speedup (I tried calling the inlined function 2M times). Why?
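
    For reference, the expansion step relied on above is pure algebra and can be written out in the question's own notation:

        \sum_{i=1}^{N-1} (x_i - x_{i+1})^2
            = \sum_{i=1}^{N-1} x_i^2 - 2\sum_{i=1}^{N-1} x_i x_{i+1} + \sum_{i=2}^{N} x_i^2
            = \underbrace{x_1^2 + x_N^2 + 2\sum_{i=2}^{N-1} x_i^2}_{\text{sum}}
              - 2\sum_{i=1}^{N-1} x_i x_{i+1}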

  • __declspec(dllimport) causes compiler crash on MSVC 2010

    - by Zero
    In a *.cpp file, trying to use a third-party lib:

        #define DLL_IMPORT
        #include <thirdParty.h>
        // Third party header has code like:
        // #ifdef DLL_IMPORT
        // #define DLL_DECL __declspec(dllimport)
        //
        // Result: fatal error C1001: An internal error has occurred in the compiler.

    Alternative:

        #define NO_DLL
        #include <thirdParty.h>
        // Third party header has code like:
        // #elif defined(NO_DLL)
        // #define DLL_DECL
        //
        // Result: compiles fine, but linker errors, as it can't find the DLL functions.

    I can reproduce the results by removing the macros and #defines altogether and manually editing the third-party files to have __declspec(dllimport) or not. Has anyone come across anything similar, or can hint at the cause? (The build is generated using CMake.) The above is the actual content of the two-line *.cpp that crashes, so it's narrowed down to something in the #include. The following also work fine:

    - Compiling the examples provided by the third party (they provide a *.sln) that use dllimport/export, so it doesn't appear to be the fault of the library
    - Compiling the third-party lib as part of the production project (so dllexport works fine)

    I've trawled the project settings pages of the two projects to try and spot differences, but have come up blank. Of course, it's possible I'm missing something, as those settings pages are not the easiest to navigate. I'll get access to VS2008 in a day or so, so I can compare with that. The third-party library is MySql++.
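
    For readers unfamiliar with the idiom, headers like this usually pick the declaration specifier with a macro ladder along these lines (a reconstruction of the pattern the comments above describe, not MySql++'s actual header):

        // hypothetical thirdParty.h excerpt
        #if defined(DLL_EXPORT)
        #  define DLL_DECL __declspec(dllexport)  // building the DLL itself
        #elif defined(DLL_IMPORT)
        #  define DLL_DECL __declspec(dllimport)  // consuming the DLL
        #elif defined(NO_DLL)
        #  define DLL_DECL                        // static linking
        #endif

        DLL_DECL void someApiFunction();          // hypothetical declaration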

  • Best ways to format LINQ queries.

    - by Aren B
    Before you ignore / vote-to-close this question, I consider this a valid question to ask, because code clarity is an important topic of discussion; it's essential to writing maintainable code, and I would greatly appreciate answers from those who have come across this before. I've recently run into this problem: LINQ queries can get pretty nasty real quick because of the large amount of nesting. Below are some examples of the differences in formatting that I've come up with (for the same relatively non-complex query).

    No Formatting

        var allInventory = system.InventorySources.Select(src => new { Inventory = src.Value.GetInventory(product.OriginalProductId, true), Region = src.Value.Region }).GroupBy(i => i.Region, i => i.Inventory);

    Elevated Formatting

        var allInventory = system.InventorySources
            .Select(src => new
                {
                    Inventory = src.Value.GetInventory(product.OriginalProductId, true),
                    Region = src.Value.Region
                })
            .GroupBy(
                i => i.Region,
                i => i.Inventory);

    Block Formatting

        var allInventory = system.InventorySources
            .Select(
                src => new
                {
                    Inventory = src.Value.GetInventory(product.OriginalProductId, true),
                    Region = src.Value.Region
                })
            .GroupBy(
                i => i.Region,
                i => i.Inventory
            );

    List Formatting

        var allInventory = system.InventorySources
            .Select(src => new { Inventory = src.Value.GetInventory(product.OriginalProductId, true),
                                 Region = src.Value.Region })
            .GroupBy(i => i.Region,
                     i => i.Inventory);

    I want to come up with a standard for LINQ formatting so that it maximizes readability and understanding, and looks clean and professional. So far I can't decide, so I turn the question over to the professionals here.

  • How to set title and class in the HTML for the options of a ModelChoiceField?

    - by celopes
    I have a model:

        class MyModel(models.Model):
            name = models.CharField(max_length=80, unique=True)
            parent = models.ForeignKey('self', null=True, blank=True)

    I want to render a ModelChoiceField for that model that looks like:

        <select name="mymodel" id="id_mymodel">
            <option value="1" title="Value 1" class="">Value 1</option>
            <option value="2" title="Value 2" class="Value 1">Value 2</option>
        </select>

    The differences between this output and the default output for a ModelChoiceField are the title and class attributes on the OPTION tags; they don't exist in ModelChoiceField's default output. For my purposes, the title attribute is supposed to be the option name, and the class attribute is supposed to be self.parent.name (this is my problem). So, in the HTML snippet above, Value 1 has no parent and Value 2 has a parent of Value 1. What is the best mechanism to change ModelChoiceField's default HTML output? EDIT: I understand how to create a new widget for rendering HTML. The problem is how to render a value from the underlying model in each of the options.
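
    A sketch of one way to get there with a custom widget (assumes Django 3.1+, where Select.create_option exists and each option's value exposes the model instance via .instance; the widget name is invented):

        from django import forms

        class TitleClassSelect(forms.Select):
            # Hypothetical widget: adds title/class attributes to each <option>.
            def create_option(self, name, value, label, selected, index,
                              subindex=None, attrs=None):
                option = super().create_option(name, value, label, selected,
                                               index, subindex=subindex, attrs=attrs)
                option['attrs']['title'] = str(label)
                # ModelChoiceIteratorValue (Django >= 3.1) carries the instance.
                instance = getattr(value, 'instance', None)
                if instance is not None and instance.parent is not None:
                    option['attrs']['class'] = instance.parent.name
                return option

        # usage: forms.ModelChoiceField(queryset=MyModel.objects.all(),
        #                               widget=TitleClassSelect)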

  • Extending abstract classes in C#

    - by ng
    I am a Java developer and I have noticed some differences in extending abstract classes in C# as opposed to Java. I was wondering how a C# developer would achieve the following.

    1) Covariance

        public abstract class A
        {
            public abstract List<B> List();
        }

        public class BList : List<T> where T : B
        {
        }

        public abstract class C : A
        {
            public abstract BList List();
        }

    So in the above hierarchy, there is covariance in C, where it returns a type compatible with what A returns. However, this gives me an error in Visual Studio. Is there a way to specify a covariant return type in C#?

    2) Adding a setter to a property

        public abstract class A
        {
            public abstract String Name { get; }
        }

        public abstract class B : A
        {
            public abstract String Name { get; set; }
        }

    Here the compiler complains of hiding. Any suggestions? Please do not suggest using interfaces unless that is the ONLY way to do this.
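
    A common workaround sketch (member names here are made up; note that C# 9.0 later added genuine covariant return types for overrides): shadow the method with new and funnel both declarations through a protected core.

        using System.Collections.Generic;

        public class B { }
        public class BList : List<B> { }

        public abstract class A
        {
            public List<B> List() { return ListCore(); }     // public surface
            protected abstract List<B> ListCore();           // what subclasses implement
        }

        public abstract class C : A
        {
            public new BList List() { return ListCoreTyped(); }  // deliberately hides A.List()
            protected abstract BList ListCoreTyped();
            protected override List<B> ListCore() { return ListCoreTyped(); }
        }

    The same new-based shadowing applies to question 2, re-declaring the property with a setter in the subclass, at the cost of concrete descendants having to satisfy both members.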

  • PHP shell_exec() - Run directly, or perform a cron (bash/php) and include MySQL layer?

    - by Jimbo
    Sorry if the title is vague - I wasn't quite sure how to word it!

    What I'm Doing

    I'm running a Linux command to output data into a variable, parse the data, and output it as an array. Array values are displayed on a page using PHP, and this PHP page output is requested via AJAX every 10 seconds, so, in effect, the data is retrieved and displayed/updated every 10 seconds. There could be as many as 10,000 characters being parsed on every request, although this is usually much lower.

    Alternative Idea

    I want to know if there is a better* alternative method of retrieving this data every 10 seconds, as multiple users (<10) will be having this command executed automatically for them. A cron job running on the server could execute either bash or PHP (which is faster?) to grab the data and store it in a MySQL database. Then, any AJAX calls to the PHP output would return values from the MySQL database rather than making a direct call to execute server code every 10 seconds.

    Why?

    I know there are security concerns with running execs directly from PHP, and (I hope this isn't micro-optimisation) I'm worried about CPU usage on the server. The server is running a Sempron processor. Yes, they do still exist. Having the command execute only when the user is on the page (idea #1) means that the server isn't running code that doesn't need to be run. However, is this slow and insecure? Just in case the type of Linux command may be of assistance in determining its efficiency:

        shell_exec("transmission-remote $host:$port --auth $username:$password -l");

    I'm hoping that there are differences in efficiency and level of security between the two methods I have outlined above, and that this isn't just micro-micro-optimisation. If there are alternative methods that are better*, I'd love to learn about them! :)
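
    If the cron route wins, the writer side could look roughly like this (a sketch only: the table, credentials, and the parse_transmission_list() helper are assumptions, not code from the question):

        <?php
        // Hypothetical cron script: fetch once, store for all readers.
        $output = shell_exec("transmission-remote $host:$port --auth $username:$password -l");
        $rows = parse_transmission_list($output);          // assumed parser
        $pdo = new PDO('mysql:host=localhost;dbname=status', 'dbuser', 'dbpass');
        $stmt = $pdo->prepare('REPLACE INTO torrent_status (user_id, payload, updated_at)
                               VALUES (?, ?, NOW())');
        $stmt->execute(array($userId, json_encode($rows)));

    The AJAX endpoint then becomes a plain SELECT against torrent_status, with no exec permissions needed by the web-facing PHP at all.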

  • Dojo's nested BorderContainer disappear in IE

    - by h4b0
    I have this terrible problem with IE 7/8/9. I wrote an app using Dojo toolkit 1.8.0 and the Play! framework. It works fine in all browsers except IE. IE's 'developer tools' show no error, and neither does Firebug. The problematic code section is here:

        <div data-dojo-type="dijit.layout.BorderContainer" data-dojo-props="design: 'headline'">
            <div data-dojo-type="dijit.layout.ContentPane" id="head" region="top">
            </div>
            <div data-dojo-type="dijit.layout.BorderContainer" data-dojo-props="region: 'center'">
                <div data-dojo-type="dijit.layout.ContentPane" id="menu" region="left">
                </div>
                <div data-dojo-type="dijit.layout.BorderContainer" data-dojo-props="region: 'center'">
                    <div data-dojo-type="dijit.layout.ContentPane" id="content_1" region="top">
                    </div>
                    <div data-dojo-type="dijit.layout.ContentPane" id="content_2" region="bottom">
                    </div>
                </div>
            </div>
            <div data-dojo-type="dijit.layout.ContentPane" id="foot" region="bottom">
            </div>
        </div>

    In all browsers except IE the layout renders as intended, but in IE the nested containers disappear (screenshots omitted). Can anyone explain why there are such differences? At first I thought that in IE the content was merely hidden, so I set overflow: auto, but no scrollbar appeared after the page load.

  • How do browser cookie domains work?

    - by Vilx-
    Due to weird domain/subdomain cookie issues that I'm getting, I'd like to know how browsers handle cookies. If they do it in different ways, it would also be nice to know the differences. In other words: when a browser receives a cookie, that cookie MAY have a domain and a path attached to it. Or not, in which case the browser probably substitutes some defaults for them. Question 1: what are they? Later, when the browser is about to make a request, it checks its cookies and filters out the ones it should send for that request. It does so by matching them against the request's path and domain. Question 2: what are the matching rules?

    Added: The reason I'm asking this is that I'm interested in some edge cases, like:

    - Will a cookie for .example.com be available for www.example.com?
    - Will a cookie for .example.com be available for example.com?
    - Will a cookie for example.com be available for www.example.com?
    - Will a cookie for example.com be available for anotherexample.com?
    - Will www.example.com be able to set a cookie for example.com?
    - Will www.example.com be able to set a cookie for www2.example.com?
    - Will www.example.com be able to set a cookie for .com?
    - Etc.

    Added 2: Also, could someone suggest how I should set a cookie so that:

    - It can be set by either www.example.com or example.com;
    - It is accessible by both www.example.com and example.com.
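
    For the two "Added 2" bullets, one data point worth stating (RFC 6265 semantics; the cookie name and value below are made up): an explicit Domain attribute makes a cookie visible to that domain and all of its subdomains, and a host may set a cookie for its own parent domain. So a header like the following, sent by either www.example.com or example.com, yields a cookie accessible to both:

        Set-Cookie: session=abc123; Domain=example.com; Path=/

    Without any Domain attribute the default is host-only: a cookie set by www.example.com with no Domain would not be sent to example.com.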

  • How to correctly pass a float from C# to C++ (dll)

    - by RavelT
    I'm getting huge differences when I pass a float from C# to C++. I'm passing a dynamic float which changes over time. With a debugger I get this:

        c++ lonVel -0.036019072 float
        c#  lonVel -0.029392920 float

    I did set my MSVC++ 2010 floating-point model to /fp:fast, which should be the standard in .NET if I'm not mistaken, but this didn't help. Now I can't give out the code, but I can show a fraction of it. From the C# side it looks like this:

        namespace Example
        {
            public class Wheel
            {
                public bool loging = true;

                #region Members
                public IntPtr nativeWheelObject;
                #endregion Members

                public Wheel()
                {
                    this.nativeWheelObject = Sim.Dll_Wheel_Add();
                    return;
                }

                #region Wrapper methods
                public void SetVelocity(float lonRoadVelocity, float latRoadVelocity)
                {
                    Sim.Dll_Wheel_SetVelocity(this.nativeWheelObject, lonRoadVelocity, latRoadVelocity);
                }
                #endregion Wrapper methods
            }

            internal class Sim
            {
                #region PInvokes
                [DllImport(pluginName, CallingConvention = CallingConvention.Cdecl)]
                public static extern void Dll_Wheel_SetVelocity(IntPtr wheel, float lonRoadVelocity, float latRoadVelocity);
                #endregion PInvokes
            }
        }

    And on the C++ side, in exportFunctions.cpp:

        EXPORT_API void Dll_Wheel_SetVelocity(CarWheel* wheel, float lonRoadVelocity, float latRoadVelocity)
        {
            wheel->SetVelocity(lonRoadVelocity, latRoadVelocity);
        }

    So, any suggestions on what I should do in order to get 1:1 results, or at least 99% correct results?
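
    One low-tech way to tell a marshaling problem apart from two debugger reads taken at different moments (a debugging sketch, not part of the question's code): log the float's exact bit pattern on both sides of the boundary and compare.

        // C# side: print the float and its raw bits just before the P/Invoke call.
        byte[] raw = BitConverter.GetBytes(lonRoadVelocity);
        uint bits = BitConverter.ToUInt32(raw, 0);
        Console.WriteLine("lonRoadVelocity = {0:R} (0x{1:X8})", lonRoadVelocity, bits);

    If the C++ side prints the same bits (e.g. memcpy the float into a uint32_t and printf it with %08x), the value crossed the boundary intact and the discrepancy is timing, not conversion.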

  • Why won't my code segfault on Windows 7?

    - by Trevor
    This is an unusual question to ask, but here goes: somewhere in my code, I accidentally dereference NULL. But instead of the application crashing with a segfault, it seems to stop execution of the current function and just return control back to the UI. This makes debugging difficult, because I would normally like to be alerted to the crash so I can attach a debugger. What could be causing this? Specifically, my code is an ODBC driver (i.e. a DLL). My test application is ODBC Test (odbct32w.exe), which allows me to explicitly call the ODBC API functions in my DLL. When I call one of the functions which has a known segfault, instead of crashing the application, ODBC Test simply returns control to the UI without printing the result of the function call. I can then call any function in my driver again. I do know that technically the application calls the ODBC driver manager, which loads and calls the functions in my driver. But that is beside the point, as my segfault (or whatever is happening) causes the driver manager function not to return either (as evidenced by the application not printing a result). One of my co-workers with a similar machine experiences this same problem while another does not, but we have not been able to determine any specific differences.

  • how can I make a "downstream only" copy of a file in TFS

    - by jcollum
    I've got this SQL script that needs to exist in two places in source control. I want to have only one real copy of this file and keep a virtual copy of the file in the other solution. One is needed for a unit test and the other for a development tool. The files should, by definition, always be the same; if they have differences, then there's a problem with our process. In SourceGear I could make a virtual copy of a specific version of a file and keep it somewhere else in the source tree. That doesn't seem to be possible in TFS. Is it possible in SVN? So what are my options here? Branching/merging - which is what the TFS team says I should be doing here - means just another step that I have to remember to do. Plus it isn't automatic, and I would prefer that this be automated. Is there some way to run an exe on checkin of a specific file? I'm thinking if I could do that, then I could do a checkout-edit-checkin of the downstream copy of the file.

  • Java: GatheringByteChannel advantages?

    - by Jason S
    I'm wondering when GatheringByteChannel's write methods (taking in an array of ByteBuffers) have advantages over the "regular" WritableByteChannel write methods. I tried a test where I could use the regular vs. the gathering write method on a FileChannel, with approx 400 KB/sec total in ByteBuffers of between 23-27 bytes in length in both cases. Gathering writes used an array of 64. The regular method used up approx 12% of my CPU, and the gathering method used up approx 16% of my CPU (worse than the regular method!). This tells me it's NOT useful to use gathering writes on a FileChannel around this range of operating parameters. Why would this be the case, and when would you ever use GatheringByteChannel? (On network I/O?) Relevant differences here:

        public void log(Queue<Packet> packets) throws IOException {
            if (this.gather) {
                int Nbuf = 64;
                ByteBuffer[] bbufs = new ByteBuffer[Nbuf];
                int i = 0;
                Packet p;
                while ((p = packets.poll()) != null) {
                    bbufs[i++] = p.getBuffer();
                    if (i == Nbuf) {
                        this.fc.write(bbufs);
                        i = 0;
                    }
                }
                if (i > 0) {
                    this.fc.write(bbufs, 0, i);
                }
            } else {
                Packet p;
                while ((p = packets.poll()) != null) {
                    this.fc.write(p.getBuffer());
                }
            }
        }

  • What's the meaning of 'char (*p)[5];'?

    - by jpmelos
    People, I'm trying to grasp the differences between these three declarations:

        char p[5];
        char *p[5];
        char (*p)[5];

    I'm trying to find this out by doing some tests, because every guide to reading declarations and the like has not helped me so far. I wrote this little program and it's not working (I've tried other ways of using the third declaration and I've run out of options):

        #include <stdio.h>
        #include <string.h>
        #include <stdlib.h>

        int main(void)
        {
            char p1[5];
            char *p2[5];
            char (*p3)[5];

            strcpy(p1, "dead");

            p2[0] = (char *) malloc(5 * sizeof(char));
            strcpy(p2[0], "beef");

            p3[0] = (char *) malloc(5 * sizeof(char));
            strcpy(p3[0], "char");

            printf("p1 = %s\np2[0] = %s\np3[0] = %s\n", p1, p2[0], p3[0]);
            return 0;
        }

    The first and second work alright, and I've understood what they do. What is the meaning of the third declaration and the correct way to use it? Thank you!
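
    For contrast, a minimal working sketch of the third form (an editor's illustration, not from the original post): char (*p)[5] is a pointer to one whole array of five chars, so it wants an allocation the size of that array, and *p (equivalently p[0]) is the array itself.

        #include <stdio.h>
        #include <string.h>
        #include <stdlib.h>

        int main(void)
        {
            char (*p3)[5];              /* pointer to an array of 5 chars */

            p3 = malloc(sizeof *p3);    /* space for one 5-char array */
            strcpy(*p3, "char");        /* *p3 (same as p3[0]) is the array */
            printf("p3[0] = %s\n", *p3);

            free(p3);
            return 0;
        }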

  • Color difference between Vista and Win7

    - by MSGrimpeur
    I have an indicator in the form of an image which is displayed in a graphics viewport. The indicator can be any colour the user selects, so we created a single image with a palette and change a specific color in the palette to the one the user picks, using the following code:

        /// <summary>
        /// Copies the image and sets transparency and fill colour of the copy. The image is
        /// intended to be a simple filled shape, such as a square with the inside all in one colour.
        /// </summary>
        /// <remarks>Assumes the fill colour to be changed is Red, black is the boundary colour
        /// and off white (RGB 233,233,233) is the colour to be made transparent</remarks>
        /// <param name="image"></param>
        /// <param name="fillColour"></param>
        /// <returns></returns>
        protected Bitmap CopyWithStyle(Bitmap image, Color fillColour)
        {
            ColorPalette selectionIndicatorPalette = image.Palette;
            int fillColourIndex = selectionIndicatorPalette.IndexOf(Color.Red);
            selectionIndicatorPalette.Entries[fillColourIndex] = fillColour;
            image.Palette = selectionIndicatorPalette;

            Bitmap tempImage = image;
            tempImage.MakeTransparent(transparentColour);
            return tempImage;
        }

    To be honest, I'm not sure if this is a bit cludgy and there is some smarter approach, so any thoughts there would help. However, the main issue is that this appears to work fine on Win7, but on Vista and XP the color does not change. Has anyone seen this before? I've found one or two articles that suggest there are some differences in ARGB handling between them, but nothing particularly concrete. Any help gratefully accepted.

  • JAR file: Could not find main class

    - by ApertureT3CH
    Okay, I have a strange problem. I wanted to run one of my programs as a .jar file, but when I open it by double-clicking it, I get an error message like "Could not find main class, program is shutting down". I'm pretty sure I did everything right; the jar should work AFAIK. I also tried other programs, and it's the same with every single one. (I'm creating the .jar files through BlueJ.) There is no problem when I run them through a .bat. And here comes the strangest thing of all: the .jar files worked some time ago (one or two months ago, I guess), and I don't remember doing anything differently. It's the same BlueJ version. Okay, maybe Java updated and something got messed up... I googled, but I couldn't find a solution. (Some people seem to have a similar problem, and it seems to be only they who can't run their .jar files; they uploaded them and other people say the .jar files run fine.) What could be the problem? How can I solve it? I'd really appreciate some help here. Thank you :) ApertureT3CH

    EDIT: Okay guys, you're making me unsure here. Imma check the manifest again, at this unholy time (1:34 am) :P

    EDIT 2: This is my MANIFEST.MF:

        Manifest-Version: 1.0
        Class-Path:
        Main-Class: LocalChatClientGUI
        [empty line]
        [empty line]

    The main class is correct.

    EDIT 3: Thanks to hgrey: there is nothing wrong with the jar. I can run it from a bat file, which actually should not be different from double-clicking the jar, right? Yet I get the error when clicking it, and it works fine through the bat.

    EDIT 4: I finally solved the problem. I re-installed the JRE and now it works, although I can't see any version differences. Thanks to everyone!

  • Misaligned Pointer Performance

    - by Elite Mx
    Aren't misaligned pointers (in the BEST possible case) supposed to slow down performance, and in the worst case crash your program (assuming the compiler was nice enough to compile your invalid C program)? Well, the following code doesn't seem to have any performance difference between the aligned and misaligned versions. Why is that?

        /* brutality.c */
        #ifdef BRUTALITY
            xs = (unsigned long *) ((unsigned char *) xs + 1);
        #endif

        ...

        /* main.c */
        #include <stdio.h>
        #include <stdlib.h>

        #define size_t_max ((size_t)-1)
        #define max_count(var) (size_t_max / (sizeof var))

        int main(int argc, char *argv[])
        {
            unsigned long sum, *xs, *itr, *xs_end;
            size_t element_count = max_count(*xs) >> 4;

            xs = malloc(element_count * (sizeof *xs));
            if (!xs) exit(1);
            xs_end = xs + element_count - 1;

            sum = 0;
            for (itr = xs; itr < xs_end; itr++)
                *itr = 0;

        #include "brutality.c"

            itr = xs;
            while (itr < xs_end)
                sum += *itr++;

            printf("%lu\n", sum);

            /* we could free the malloc-ed memory here */
            /* but we are almost done */
            exit(0);
        }

    Compiled and tested on two separate machines using:

        gcc -pedantic -Wall -O0 -std=c99 main.c
        for i in {0..9}; do time ./a.out; done

  • jquery 2 fades complete at the same time

    - by odavy
    Hello again, just another quick one: I am noticing differences with fadeOut depending on whether this is the target. Here's my structure: I have rows of data on my page, and each row has two icons. One is an update icon for that row, one is a delete icon for that row. When a user clicks the update icon for a particular row, I want both the update and the delete icons to fade away. So, in order to fade the thing the user clicked (the update button) and its corresponding delete button, I am using:

        $(this).next().add(this).fadeOut('slow');

    ...which works, but the two elements don't fade at the same time. this fades first (which is the update icon), and then this.next fades out (the delete icon). But if I specify the two elements by name:

        $('#updS2, #delS2').fadeOut('slow');

    then they fade together. Why is it different? Apologies for Friday rambling.

  • CSS Style Firefox/Safari/Chrome

    - by patrick
    Hi, I have a problem with CSS differences between browsers. I have a simple input text field and a submit button, which should line up. With WebKit (Safari/Chrome) everything looks fine, but Firefox doesn't render it the same way. Does anyone have an idea what's wrong? I have written a little test HTML page:

        <html>
        <head>
        <style type="text/css">
        #input {
            background: none repeat scroll 0 0 white;
            border-color: #DCDCDC;
            border-style: solid;
            border-width: 1px 0 1px 1px;
            font: 13px "Lucida Grande",Arial,Sans-serif;
            margin: 0;
            padding: 5px 5px 5px 15px;
            width: 220px;
            outline-width: 0;
            height: 30px;
        }
        #submit {
            background: none repeat scroll 0 0 white;
            border: 1px solid #DCDCDC;
            font: 13px "Lucida Grande",Arial,Sans-serif;
            margin: 0;
            outline-width: 0;
            height: 30px;
            padding: 5px 10px;
        }
        </style>
        </head>
        <body>
        <input id="input" type="text" value="" /><input id="submit" type="button" value="Add" />
        </body>
        </html>

  • Java vs. C variant for desktop and tablet development

    - by MirroredFate
    I am going to write a desktop application, but I am conflicted about which language to use. It (the desktop application) will need to have a good GUI and to be extendable (hopefully with modules of some sort). It must be completely cross-platform, including being executable in various tablet environments; I put this as a requirement while realizing that some modification will no doubt be necessary. The language should also have some form of networking tools available. I have read http://introcs.cs.princeton.edu/java/faq/c2java.html and understand the differences between Java and C very well. I am looking not necessarily at C, but more at a C variant. If it is a complete toss-up, I will use Java, as I know Java much better. However, I do not want to use a language that will be inferior for the task I wish to accomplish. Thank you for all suggestions and explanations. NOTE: If this is not the correct stack for this question, I apologize. It seemed appropriate according to the rules.

  • [PHP] Does unsetting array values during iteration save on memory?

    - by saturn_rising
    Hello fellow code warriors, this is a simple programming question, coming from my lack of knowledge of how PHP handles array copying and unsetting during a foreach loop. It's like this: I have an array that comes to me from an outside source, formatted in a way I want to change. A simple example would be:

        $myData = array('Key1' => array('value1', 'value2'));

    But what I want would be something like:

        $myData = array(0 => array('MyKey' => array('Key1' => array('value1', 'value2'))));

    So I take the first $myData and format it like the second $myData. I'm totally fine with my formatting algorithm. My question lies in finding a way to conserve memory, since these arrays might get a little unwieldy. So, during my foreach loop I copy the current array value(s) into the new format, then I unset the value I'm working with from the original array. E.g.:

        $formattedData = array();
        foreach ($myData as $key => $val) {
            // do some formatting here, copy to $reformattedVal
            $formattedData[] = $reformattedVal;
            unset($myData[$key]);
        }

    Is the call to unset() a good idea here? I.e., does it conserve memory, since I have copied the data and no longer need the original value? Or does PHP automatically garbage-collect the data, since I don't reference it in any subsequent code? The code runs fine, and so far my datasets have been too negligible in size to test for performance differences. I just don't know if I'm setting myself up for some weird bugs or CPU hits later on. Thanks for any insights. -sR
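
    One way to settle this empirically rather than by theory (a sketch; memory_get_usage() and memory_get_peak_usage() are standard PHP builtins, and the test data here is made up):

        <?php
        // Build a deliberately large input array.
        $myData = array_fill(0, 100000, str_repeat('x', 100));
        echo memory_get_usage(), " bytes before\n";

        $formattedData = array();
        foreach ($myData as $key => $val) {
            $formattedData[] = array('MyKey' => $val);
            unset($myData[$key]);   // the line under test
        }

        echo memory_get_usage(), " bytes after\n";
        // Running this with and without the unset() line and comparing
        // memory_get_peak_usage() shows whether unset() actually helps.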

  • Unable to set up SSL support for Apache 2 on Debian

    - by Francesco
    I am trying to set up SSL support for Apache 2 on Debian. Versions are: Debian GNU/Linux 6.0, apache2 2.2.16-6+squeeze1. I followed a lot of how-tos for days, but I couldn't make it work. Here are my steps and configuration files (ServerName and DocumentRoot are changed for privacy; tell me if they matter):

        # mkdir /etc/apache2/ssl
        # openssl req $@ -new -x509 -days 365 -nodes -out /etc/apache2/apache.pem -keyout /etc/apache2/apache.pem

    At this point I have a doubt about the permissions on apache.pem; at this step they are:

        -rw-r--r-- 1 root root

    Maybe it has to belong to www-data? Then I enable the ssl module with:

        # a2enmod ssl
        # /etc/init.d/apache2 restart

    I modify /etc/apache2/sites-available/default-ssl in this way (I put port 8080 because I need port 443 for another purpose):

        <VirtualHost *:8080>
            SSLEngine on
            SSLCertificateFile /etc/apache2/ssl/apache.pem
            SSLCertificateKeyFile /etc/apache2/ssl/apache.pem
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options Indexes FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

        <VirtualHost *:8080>
            DocumentRoot /home/user1/public_html/
            ServerName first.server.org
            # Other directives here
        </VirtualHost>

        <VirtualHost *:8080>
            DocumentRoot /home/user2/public_html/
            ServerName second.server.org
            # Other directives here
        </VirtualHost>

    I have to point out that the same configuration works over plain HTTP (it is a copy of /etc/apache2/sites-available/default with some differences: the port and the SSL directives). My /etc/apache2/ports.conf is the following:

        # If you just change the port or add more ports here, you will likely also
        # have to change the VirtualHost statement in
        # /etc/apache2/sites-enabled/000-default
        # This is also true if you have upgraded from before 2.2.9-3 (i.e. from
        # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
        # README.Debian.gz

        #NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            #NameVirtualHost *:8080
            Listen 8080
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 8080
        </IfModule>

    Any suggestions? Thanks
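
    Two mismatches stand out in the steps as posted, phrased as guesses rather than a definitive fix: the key/certificate is written to /etc/apache2/apache.pem, while the vhost reads /etc/apache2/ssl/apache.pem, and no step shown actually enables the default-ssl site. On a stock Debian layout, that would be roughly:

        # move the certificate to where the vhost expects it
        mv /etc/apache2/apache.pem /etc/apache2/ssl/apache.pem
        chmod 600 /etc/apache2/ssl/apache.pem

        # enable the SSL site and restart
        a2ensite default-ssl
        /etc/init.d/apache2 restart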

  • SQL 2000: Intermittent Error 7399 with OLE DB Provider for Microsoft Jet

    - by Tim Lara
    I am using SQL Server 2000 on Windows Server 2003 SP2 and have set up a linked server to point at an Access 97 database using the OLE DB Provider 4.0 for Microsoft Jet. The problem I am having sounds almost exactly like the one described in this Microsoft KB article, except that the error I am getting is intermittent: http://support.microsoft.com/kb/814398

    The SQL Server is running under the Local System account (which I don't have authority to change), and the Access 97 .mdb file that the linked server points to is on a Win XP Pro machine on the same LAN as the SQL Server machine, inside a shared folder with permissions set to "Everyone" and "Full Control". Now, if the linked server connection never worked, it would make more sense that the problem is merely a permissions issue with the Local System account, as the KB article above suggests, but the maddening thing is that sometimes the connection works just fine. When it fails, the error message is always the same:

        Error 7399: OLE DB provider 'Microsoft.Jet.OLEDB.4.0' reported an error.
        [OLE/DB provider returned message: Unspecified error]
        OLE DB error trace [OLE/DB Provider 'Microsoft.Jet.OLEDB.4.0'
        IDBInitialize::Initialize returned 0x80004005: ].

    Also, not only does the linked server setup occasionally work just fine on this one particular SQL Server, what is supposed to be exactly the same setup on 25 other servers works just fine EVERY TIME! Obviously, something in the non-working setup must not be exactly the same, but I'm having trouble figuring out where to look for the differences, since the error message SQL Server returns is so vague. I know our sysadmins have had numerous issues with Active Directory replication across our domain, so my best guess is that there is some sort of odd group policy corruption going on, but I thought I'd ask here to see if I might be overlooking something more straightforward. Any ideas on how to further isolate the error would be greatly appreciated! For the record, here is a list of things I've already tried:

    - Rebooting the SQL Server machine. Fixes the issue temporarily, then the error returns within a minute or two of startup. (This is why I suspect a rogue group policy that is slow to apply fouling things up.)
    - Importing all database objects from the Access 97 mdb into a new, clean mdb file. Makes no difference.
    - Moving the Access 97 mdb file to a local directory on the SQL Server machine instead of accessing it via a share on the Win XP Pro LAN machine. This works, but does not solve the problem, because the mdb needs to be on the client machine for performance reasons and for the ability to work "stand-alone". Plus, the same shared-folder access works fine on all other servers/clients on my network.
    - Compared all the SQL Server, Windows Server, etc. versions to a known working setup; everything appears to be the same.

  • Why is vCenter 5.1u1 exiting hosts from maintenance mode?

    - by Shane Madden
    This vCenter server was just upgraded to 5.1 update 1. I'm going through hosts, bringing firmware up to date, then upgrading them from various versions of 5.0 to 5.1u1. vCenter 5.1u1 seems to have an interesting new behavior: it's removing hosts from maintenance mode when they reconnect after being disconnected - but very inconsistently; I've seen it maybe 4 or 5 times on ~25-30 host reboots. I've only seen it happen on 5.0 hosts that have not yet been upgraded to 5.1. In the case described here (screenshot omitted), I placed the host in maintenance mode and rebooted it into the HP SPP DVD's automatic update mode. After its usual ~40 minute update process, the host came back online... and 7 seconds before even logging that the host had reconnected, vCenter had sent the host a task to exit maintenance mode. In my understanding, the only time vCenter should drop a host out of maintenance mode is when vCenter put it into maintenance mode itself (such as a VUM upgrade task). Why would this vCenter be unilaterally exiting a host from user-initiated maintenance mode?

    Edit, additional info: I ran the firmware upgrades on 5 more hosts, all at the same time. Two of them exited maintenance mode after reconnecting, three did not. The common factor among those exiting maintenance mode seems to be how long they were offline; the two that took a few tries to boot to the virtual media are the two that got knocked out of maintenance mode.

    - esx31 (the case above; exited maint): 45 minutes unresponsive
    - esx19 (exited maint): 87 minutes unresponsive
    - esx24 (stayed in maint): 32 minutes unresponsive
    - esx29 (stayed in maint): 39 minutes unresponsive
    - esx32 (stayed in maint): 30 minutes unresponsive
    - esx34 (exited maint): 70 minutes unresponsive

    Edit: The disconnect-time idea seems to have been a red herring, as it's not happening consistently. Additionally, in the vpxd.log, the exit-maintenance-mode task initiation seems to always immediately follow this vim.EnvironmentBrowser.queryProvisioningPolicy SOAP call. Here are the lines, slightly trimmed for clarity:

        15:27:49.535 [info 'vpxdvpxdVmomi'] [ClientAdapterBase::InvokeOnSoap] Invoke done (esx31, vim.EnvironmentBrowser.queryProvisioningPolicy)
        15:27:49.560 [info 'commonvpxLro'] [VpxLRO] -- BEGIN task -- esx31 -- HostSystem.exitMaintenanceMode --

    Note that on the nodes that don't get the exit task, the vim.EnvironmentBrowser.queryProvisioningPolicy event still occurs. I'm not seeing any other differences in events before or after this in the reconnect process, aside from the extra events caused by exiting maintenance mode. Given the log's mention of provisioning policies, looking for autodeploy-related maintenance-mode issues turns up complaints about similar behavior (though I'm not using autodeploy at all).

  • EC2 Filesystem / Files stored on the wrong partiton after launching new instance from AMI

    - by Philip Isaacs
    Today I set up a new EC2 instance from an AMI I created from an older EC2 instance. When I launched the new instance, I took the AMI that was on a small instance and launched it on a medium instance. From what I can tell, this is pretty standard stuff. But here's the strange part. According to AWS, these are the differences:

    - Small Instance (Default): 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of local instance storage, 32-bit or 64-bit platform
    - Medium Instance: 3.75 GB of memory, 2 EC2 Compute Units (1 virtual core with 2 EC2 Compute Units each), 410 GB of local instance storage, 32-bit or 64-bit platform

    Okay, now here's where I'm having an issue. When I log into the new, bigger instance, it still reports having only 1.7 GB of RAM. The other strange part is that all my old partitions are still there in the same configuration. I see a new, larger partition /mnt, which is essentially empty:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             7.9G  5.9G  1.6G  79% /
        none                  846M  120K  846M   1% /dev
        none                  879M     0  879M   0% /dev/shm
        none                  879M   76K  878M   1% /var/run
        none                  879M     0  879M   0% /var/lock
        none                  879M     0  879M   0% /lib/init/rw
        /dev/sda2             335G  195M  318G   1% /mnt
        /dev/sdf               16G  9.9G  5.1G  67% /var2

    This EC2 instance is a web server, and I was serving files off the /var2 directory, but for some reason the instance is storing everything on /. Okay, here's what I'd like to do: move all my website files to /mnt and have the web server point to that. Any suggestions? If it helps, here is the mount output as well:

        root@myserver:/var# mount -l
        /dev/sda1 on / type ext3 (rw) [cloudimg-rootfs]
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        none on /dev type devtmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
        /dev/sda2 on /mnt type ext3 (rw)
        /dev/sdf on /var2 type ext4 (rw,noatime)

    I hope this question makes sense. Basically, I want my old files on this new partition. Thanks in advance.
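
    A sketch of the move itself (paths taken from the df output above; the web-server config location is an assumption). One caveat worth knowing first: /mnt on EC2 is instance-store (ephemeral) storage, so anything placed there is lost when the instance stops; that may argue for keeping the site on the EBS-backed /dev/sdf volume instead.

        # copy the site onto the large partition, preserving permissions/ownership
        rsync -a /var2/ /mnt/www/

        # then point the web server at the new location, e.g. in the Apache vhost:
        #   DocumentRoot /mnt/www
        # and reload the web server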
