Search Results

Search found 11975 results on 479 pages for 'vs templates'.


  • [jQuery] JS inside the template

    - by Martin Trigaux
    Hello, I'm trying to include some JavaScript code inside a template. The code of my HTML page:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
            <script type="text/javascript" src="jquery-1.4.2.min.js"></script>
            <script type="text/javascript" src="jquery-jtemplates.js"></script>
        </head>
        <body>
            <div id="infos"></div>
            <div id="my_template"></div>
            <script type="text/javascript">
                $(document).ready(function() {
                    $('#my_template').setTemplateURL("my_template.html");
                    try {
                        $('#my_template').processTemplate({'var' : 1});
                    } catch (e) {
                        $('#infos').html('error : ' + e);
                    }
                    $('#my_button').click(function(){
                        alert('it works outside');
                    });
                });
            </script>
        </body>
        </html>

    and this is the content of the template:

        {#if $T.var == 1}
        <script type="text/javascript">
            $(document).ready(function() {
                $('#my_button').click(function(){
                    alert('it works inside');
                });
            });
        </script>
        <input type='submit' id='my_button' value='click me' onclick='alert("direct");'/>
        {#else}
        not working
        {#/if}

    This produces an error inside the infos tag:

        error : SyntaxError: missing } after function body

    If I just put alert('it works inside'); inside the script tag (removing all the jQuery-related code), the page loads and the two messages "direct" and "it works outside" are shown, but not the "it works inside" message. It is supposed to work, since the doc page says "Allow to use JavaScript code in templates". Thank you


  • Dynamic Dispatch without Virtual Functions

    - by Kristopher Johnson
    I've got some legacy code that, instead of virtual functions, uses a kind field to do dynamic dispatch. It looks something like this:

        // Base struct shared by all subtypes
        // Plain-old data; can't use virtual functions
        struct POD {
            int kind;
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
        };

        enum Kind { Kind_Derived1, Kind_Derived2, Kind_Derived3 };

        struct Derived1: POD {
            Derived1() { kind = Kind_Derived1; }  // kind is inherited, so it must be assigned, not mem-initialized
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
            // plus other type-specific data and function members
        };

        struct Derived2: POD {
            Derived2() { kind = Kind_Derived2; }
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
            // plus other type-specific data and function members
        };

        struct Derived3: POD {
            Derived3() { kind = Kind_Derived3; }
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
            // plus other type-specific data and function members
        };

    and then the POD class's function members are implemented like this:

        int POD::GetFoo() {
            // Call kind-specific function
            switch (kind) {
            case Kind_Derived1: {
                Derived1 *pDerived1 = static_cast<Derived1*>(this);
                return pDerived1->GetFoo();
            }
            case Kind_Derived2: {
                Derived2 *pDerived2 = static_cast<Derived2*>(this);
                return pDerived2->GetFoo();
            }
            case Kind_Derived3: {
                Derived3 *pDerived3 = static_cast<Derived3*>(this);
                return pDerived3->GetFoo();
            }
            default:
                throw UnknownKindException(kind, "GetFoo");
            }
        }

    POD::GetBar(), POD::GetBaz(), POD::GetXyzzy(), and other members are implemented similarly. This example is simplified: the actual code has about a dozen different subtypes of POD and a couple dozen methods. New subtypes of POD and new methods are added pretty frequently, so every time we do that, we have to update all these switch statements. The typical way to handle this would be to declare the function members virtual in the POD class, but we can't do that because the objects reside in shared memory. A lot of code depends on these structs being plain old data, so even if I could figure out some way to have virtual functions in shared-memory objects, I wouldn't want to do that. So I'm looking for suggestions on the best way to clean this up so that all the knowledge of how to call the subtype methods is centralized in one place, rather than scattered among a couple dozen switch statements in a couple dozen functions. What occurs to me is that I could create some sort of adapter class that wraps a POD and uses templates to minimize the redundancy. But before I start down that path, I'd like to know how others have dealt with this.
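
    One way to centralize that knowledge, sketched below, is a hand-rolled dispatch table: the function-pointer "vtable" lives in ordinary process memory, so the shared-memory structs stay plain old data, and adding a subtype or method means touching exactly one table. This is only a minimal sketch under the question's constraints; the PODOps and Thunk* names are hypothetical, not from the actual code base.

        // dispatch_table.cpp -- kind-indexed function-pointer table.
        #include <iostream>
        #include <stdexcept>

        struct POD { int kind; int GetFoo(); int GetBar(); };

        enum Kind { Kind_Derived1, Kind_Derived2, Kind_Count };

        struct Derived1 : POD {
            Derived1() { kind = Kind_Derived1; }
            int GetFoo() { return 1; }
            int GetBar() { return 10; }
        };
        struct Derived2 : POD {
            Derived2() { kind = Kind_Derived2; }
            int GetFoo() { return 2; }
            int GetBar() { return 20; }
        };

        // One row per subtype: a function pointer per method.
        struct PODOps {
            int (*GetFoo)(POD&);
            int (*GetBar)(POD&);
        };

        // Templates generate the downcast-and-forward thunks per subtype.
        template <typename D> int ThunkGetFoo(POD& p) { return static_cast<D&>(p).GetFoo(); }
        template <typename D> int ThunkGetBar(POD& p) { return static_cast<D&>(p).GetBar(); }

        // The single place that knows every subtype.
        static const PODOps kOps[Kind_Count] = {
            { &ThunkGetFoo<Derived1>, &ThunkGetBar<Derived1> },
            { &ThunkGetFoo<Derived2>, &ThunkGetBar<Derived2> },
        };

        int POD::GetFoo() {
            if (kind < 0 || kind >= Kind_Count) throw std::runtime_error("unknown kind");
            return kOps[kind].GetFoo(*this);
        }
        int POD::GetBar() {
            if (kind < 0 || kind >= Kind_Count) throw std::runtime_error("unknown kind");
            return kOps[kind].GetBar(*this);
        }

        int main() {
            Derived1 d1;
            POD* p = &d1;
            std::cout << p->GetFoo() << "\n";  // prints 1, dispatched via the table
        }

    Adding a new method under this scheme means one new thunk template and one new field per table row; adding a new subtype means one new row.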


  • Is there any loose coupling mechanism in Objective-C + Cocoa like C# delegates or C++/Qt signals+slots?

    - by Eye of Hell
    Hello. For large programs, the standard way to manage complexity is to divide the program code into small objects. Most current programming languages offer this functionality via classes, and so does Objective-C. But after the source code is separated into small objects, the second challenge is to somehow connect them with each other. Standard approaches, supported by most languages, are composition (one object is a member field of another), inheritance, templates (generics), and callbacks. More exotic techniques include method-level delegates (C#) and signals+slots (C++/Qt). I like the delegates / signals idea, since while connecting two objects I can connect individual methods with each other, without the objects knowing anything of each other. For C#, it looks like this:

        var object1 = new CObject1();
        var object2 = new CObject2();
        object1.SomethingHappened += object2.HandleSomething;

    In this code, if object1 calls its SomethingHappened delegate (like a normal method call), the HandleSomething method of object2 will be called. For C++/Qt, it looks like this:

        CObject1* object1 = new CObject1();
        CObject2* object2 = new CObject2();
        connect( object1, SIGNAL(SomethingHappened()),
                 object2, SLOT(HandleSomething()) );

    The result will be exactly the same. This technique has some advantages and disadvantages, but generally I like it more than interfaces, since as the code base grows I can change connections and add new ones without creating tons of interfaces. After examining Objective-C I haven't found any way to use this technique I like :(. It seems that Objective-C supports message passing perfectly well, but it requires object1 to have a pointer to object2 in order to pass it a message. If some object needs to be connected to lots of other objects, in Objective-C I will be forced to give it pointers to each of the objects it must be connected to. So, the question :). Is there any approach in Objective-C programming that closely resembles the delegate / signal+slot type of connection, rather than 'give the first object an entire pointer to the second object so it can pass a message to it'? Method-level connections are a bit more preferable to me than object-level connections ^_^.


  • FILE_NOT_FOUND when trying to open COM port C++

    - by Moutabreath
    I am trying to open a COM port for reading and writing using C++, but I can't seem to get past the first stage of actually opening it. I get an INVALID_HANDLE_VALUE on the handle, with GetLastError returning FILE_NOT_FOUND. I have searched around the web for a couple of days and I'm fresh out of ideas. I have searched through all the questions regarding COM on this website too. I have scanned through the existing ports (or so I believe) to get the name of the port right. I also tried combinations of _T("COM1") with the slashes, without the slashes, with the colon, without the colon, and without the _T. I'm using Windows 7 on a 64-bit machine. This is the code I've got; I'll be glad for any input on it:

        void SendToCom(char* data, int len)
        {
            DWORD cbNeeded = 0;
            DWORD dwPorts = 0;
            EnumPorts(NULL, 1, NULL, 0, &cbNeeded, &dwPorts);
            // What will be the return value
            BOOL bSuccess = FALSE;
            LPCSTR COM1;
            BYTE* pPorts = static_cast<BYTE*>(malloc(cbNeeded));
            bSuccess = EnumPorts(NULL, 1, pPorts, cbNeeded, &cbNeeded, &dwPorts);
            if (bSuccess) {
                PORT_INFO_1* pPortInfo = reinterpret_cast<PORT_INFO_1*>(pPorts);
                for (DWORD i = 0; i < dwPorts; i++) {
                    // If it looks like "COMX" then
                    size_t nLen = _tcslen(pPortInfo->pName);
                    if (nLen > 3) {
                        if (_tcsnicmp(pPortInfo->pName, _T("COM"), 3) == 0) {
                            COM1 = pPortInfo->pName;
                            //COM1 = "\\\\.\\COM1";
                            HANDLE m_hCommPort = CreateFile(
                                COM1,
                                GENERIC_READ | GENERIC_WRITE, // access (read and write)
                                0,                    // (share) 0: cannot share the COM port
                                NULL,                 // security (None)
                                OPEN_EXISTING,        // creation: open_existing
                                FILE_FLAG_OVERLAPPED, // we want overlapped operation
                                NULL                  // no template file for COM port
                            );
                            if (m_hCommPort == INVALID_HANDLE_VALUE) {
                                DWORD err = GetLastError();
                                if (err == ERROR_FILE_NOT_FOUND) {
                                    MessageBox(hWnd, "ERROR_FILE_NOT_FOUND", NULL, MB_ABORTRETRYIGNORE);
                                } else if (err == ERROR_INVALID_NAME) {
                                    MessageBox(hWnd, "ERROR_INVALID_NAME", NULL, MB_ABORTRETRYIGNORE);
                                } else {
                                    MessageBox(hWnd, "unknown error", NULL, MB_ABORTRETRYIGNORE);
                                }
                            } else {
                                WriteAndReadPort(m_hCommPort, data);
                            }
                        }
                    }
                    pPortInfo++;  // advance for every port, not just COM* entries
                }
            }
        }
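
    A side note on naming that often bites here: CreateFile does not accept the colon-suffixed names ("COM1:") that EnumPorts can return, and ports above COM9 are only reachable through the \\.\ device namespace. Below is a minimal, self-contained sketch of opening a port with the \\.\ prefix, which is the safe spelling for every port number; it assumes a COM1 device actually exists on the machine.

        // comport_open.cpp -- open a serial port via the \\.\ namespace.
        #include <windows.h>
        #include <stdio.h>

        int main()
        {
            // "\\\\.\\COM1" is the C string literal for \\.\COM1.
            HANDLE h = CreateFileA("\\\\.\\COM1",
                                   GENERIC_READ | GENERIC_WRITE,
                                   0,              // serial ports cannot be shared
                                   NULL,           // default security
                                   OPEN_EXISTING,  // required for devices
                                   FILE_FLAG_OVERLAPPED,
                                   NULL);          // no template file for devices
            if (h == INVALID_HANDLE_VALUE) {
                printf("CreateFile failed, error %lu\n", GetLastError());
                return 1;
            }
            printf("port opened\n");
            CloseHandle(h);
            return 0;
        }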


  • jquery ajax form success callback not being called

    - by Michael Merchant
    I'm trying to upload a file using "AJAX", process data in the file, and then return some of that data to the UI so I can dynamically update the screen. I'm using the jQuery Ajax Form Plugin, jquery.form.js, found at http://jquery.malsup.com/form/ for the JavaScript, and Django on the back end. The form is being submitted and the processing on the back end goes through without a problem, but when a response is received from the server, my Firefox browser prompts me to download/open a file of type "application/json". The file has the JSON content that I've been trying to send to the browser. I don't believe this is an issue with how I'm sending the JSON, as I have a modularized json_wrapper() function that I'm using in multiple places in this same application. Here is what my form looks like after Django templates are applied:

        <form method="POST" enctype="multipart/form-data" action="/test_suites/active/upload_results/805/">
            <p>
                <label for="id_resultfile">Upload File:</label>
                <input type="file" id="id_resultfile" name="resultfile">
            </p>
        </form>

    You won't see any submit buttons because I'm calling submit with a button elsewhere and am using ajaxSubmit() from the jquery.form.js plugin. Here is the controlling JavaScript code:

        function upload_results($dialog_box){
            $form = $dialog_box.find("form");
            var options = {
                type: "POST",
                success: function(data){
                    alert("Hello!!");
                },
                dataType: "json",
                error: function(){
                    console.log("errors");
                },
                beforeSubmit: function(formData, jqForm, options){
                    console.log(formData, jqForm, options);
                }
            };

            $form.submit(function(){
                $(this).ajaxSubmit(options);
                return false;
            });
            $form.ajaxSubmit(options);
        }

    As you can see, I've gotten desperate enough to see the success callback work that I simply raise an alert message on success. However, we never reach that call. Also, the error function is not called, while the beforeSubmit function is executed. The file that I get back has the following contents:

        {"count": 18, "failed": 0, "completed": 18, "success": true, "trasaction_id": "SQEID0.231"}

    I use 'success' here to denote whether or not the server was able to run the post command adequately. If it failed, the result would look something like:

        {"success": false, "message": "<error_message>"}

    Your time and help is greatly appreciated. I've spent a few days on this now and would love to move on.


  • Closing a process that was opened in code

    - by AmirHossein
    I create a Word template with some placeholder fields; in code I insert values into these placeholders and show the document to the user.

        protected void Button1_Click(object sender, EventArgs e)
        {
            string DocFilePath = "";
            //string FilePath = System.Windows.Forms.Application.StartupPath;
            object fileName = @"[...]\asset\word templates\FormatPeygiri1.dot";
            DocFilePath = fileName.ToString();
            FileInfo fi = new FileInfo(DocFilePath);
            if (fi.Exists)
            {
                object readOnly = false;
                object isVisible = true;
                object PaperNO = "PaperNO";
                object PaperDate = "PaperDate";
                object Peyvast = "Peyvast";
                object To = "To";
                object ShoName = "ShoName";
                object DateName = "DateName";

                Microsoft.Office.Interop.Word.Document aDoc = WordApp.Documents.Open(
                    ref fileName, ref missing, ref readOnly, ref missing, ref missing,
                    ref missing, ref missing, ref missing, ref missing, ref missing,
                    ref missing, ref isVisible, ref isVisible, ref missing, ref missing,
                    ref missing);

                WordApp.ActiveDocument.FormFields.get_Item(ref PaperNO).Result = TextBox_PaperNO.Text;
                string strPaperDate = string.Format("{0}/{1}/{2}",
                    PersianDateTimeHelper.GetPersainDay(DateTimePicker_PaperDate.SelectedDate),
                    PersianDateTimeHelper.GetPersainMonth(DateTimePicker_PaperDate.SelectedDate),
                    PersianDateTimeHelper.GetPersainYear(DateTimePicker_PaperDate.SelectedDate));
                WordApp.ActiveDocument.FormFields.get_Item(ref PaperDate).Result = strPaperDate;
                WordApp.ActiveDocument.FormFields.get_Item(ref Peyvast).Result = TextBox_Peyvast.Text;
                WordApp.ActiveDocument.FormFields.get_Item(ref To).Result = TextBox_To.Text;
                WordApp.ActiveDocument.FormFields.get_Item(ref ShoName).Result = TextBox_ShoName.Text;
                string strDateName = string.Format("{0}/{1}/{2}",
                    PersianDateTimeHelper.GetPersainDay(DateTimePicker_DateName.SelectedDate),
                    PersianDateTimeHelper.GetPersainMonth(DateTimePicker_DateName.SelectedDate),
                    PersianDateTimeHelper.GetPersainYear(DateTimePicker_DateName.SelectedDate));
                WordApp.ActiveDocument.FormFields.get_Item(ref DateName).Result = strDateName;

                aDoc.Activate();
                WordApp.Visible = true;
                aDoc = null;
                WordApp = null;
            }
            else
            {
                MessageBox1.Show("File Not Exist!");
            }
        }

    This works fine, but when the user closes Word, its process is not terminated and remains in the Task Manager process list. The process name is WINWORD.exe. I know I can kill a process from code with process.Kill(), but I don't know which process to kill. If I kill every process named WINWORD.exe, all Word windows get closed, whereas I want to close only the specific Word window and kill the process that I opened. How do I do that?


  • Replace text in XSL using wildcards

    - by JosephThomas
    This is similar to an earlier problem I was having, which you guys solved in less than a day. I am working with XML files that are generated by a digital video camera. The camera allows the user to save all of the camera's settings to an SD card so that the settings can be recalled or loaded into another camera. The XSL stylesheet I am writing will allow users to view the camera's settings, as saved to the SD card, in a web browser. While most of the values in the XML file -- as formatted by my stylesheet -- make sense to humans, some do not. What I would like to do is have the stylesheet display text that is based on the value in the XML file but more easily understood by humans. A typical value that can be written to the XML file is "_23_970", which represents the camera's frame rate. This would be better displayed as 23.970 (or 023.970). The first underscore is a sort of placeholder to make space for values over 099.999. The second underscore obviously represents the decimal point. My previous (similar) question involved replacing predictable text, and the solution was matching templates. In this case, however, the camera can be set to any one of 119,999 frame rates (I think I did that math correctly). The approach, I would guess, is to pass a value to the displayed web page that keeps the numeric digits, replaces the second underscore with a decimal point, and replaces the first underscore with either an nbsp or a zero (whichever is easier). If the first character in the string is a "1" (the camera can run at frame rates up to 120.000), then the one should be passed on to the page displayed by the stylesheet. I have read other posts here regarding wildcards, but couldn't find one that answered this question. EDIT: Sorry for leaving out important info; I fared better on my first try at asking a question! I guess I got complacent. Anyhow, I should have shown you the code that displays the text in the XSL file as is:

        <tr>
            <xsl:for-each select="Settings/Groups/Recording">
                <tr>
                    <td class="title_column">Frame Rate</td>
                    <td><xsl:value-of select="RecOutLinkSpeed"/></td>
                </tr>
            </xsl:for-each>
        </tr>

    I should also have given you the URL for the sample file I have been working with: http://josephthomas.info/Alexa/Setup_120511_140322.xml


  • Database design advice needed.

    - by user346271
    Hi all, I'm a lone developer for a telecoms company, and am after some database design advice from anyone with a bit of time to answer. I am inserting ~2 million rows each day into one table; these tables then get archived and compressed on a monthly basis. Each monthly table contains ~15,000,000 rows, and this is increasing month on month. For every insert above, I combine the data from rows that belong together and create another "correlated" table. This table is currently not being archived, as I need to make sure I never miss an update to the correlated table. (Hope that makes sense.) In general, though, this information should remain fairly static after a couple of days of processing. All of the above is working perfectly. However, my company now wishes to perform some stats against this data, and these tables are getting too large to provide the results in what would be deemed a reasonable time, even with the appropriate indexes set. So I guess after all the above my question is quite simple: should I write a script which groups the data from my correlated table into smaller tables, or should I store the query result sets in something like memcache? I'm already using MySQL's query cache, but due to having limited control over how long the data is stored for, it's not working ideally.

    The main advantages I can see of using something like memcache:
        - No blocking on my correlated table after the query has been cached.
        - Greater flexibility in sharing the collected data between the back-end collector and front-end processor (i.e. custom reports could be written in the back end and their results stored in the cache under a key which then gets shared with anyone who would want to see the data of this report).
        - Redundancy and scalability if we start sharing this data with a large number of customers.

    The main disadvantages I can see of using something like memcache:
        - Data is not persistent if the machine is rebooted / the cache is flushed.

    The main advantages of using MySQL:
        - Persistent data.
        - Fewer code changes (although adding something like memcache is trivial anyway).

    The main disadvantages of using MySQL:
        - Have to define table templates every time I want to store a new set of grouped data.
        - Have to write a program which loops through the correlated data and fills these new tables.
        - Will potentially still grow slower as the data continues to be filled.

    Apologies for quite a long question. It's helped me to write down these thoughts here anyway, and any advice/help/experience with dealing with this sort of problem would be greatly appreciated. Many thanks. Alan


  • PHP create page as a string after PHP runs

    - by John
    I'm stuck on how to write the result of the test.php page (after the PHP has run) to a string.

    testFunctions.php:

        <?php
        function htmlify($html, $format){
            if ($format == "print"){
                $html = str_replace("<", "&lt;", $html);
                $html = str_replace(">", "&gt;", $html);
                $html = str_replace("&nbsp;", "&amp;nbsp;", $html);
                $html = nl2br($html);
                return $html;
            }
        };

        $input = <<<HTML
        <div style="background color:#959595; width:400px;">
        &nbsp;<br>
        input <b>text</b>
        <br>&nbsp;
        </div>
        HTML;

        function content($input, $mode){
            if ($mode == "display"){
                return $input;
            } else if ($mode == "source"){
                return htmlify($input, "print");
            };
        };

        function pagePrint($page){
            $a = array(
                'file_get_contents' => array($page),
                'htmlify' => array($page, "print")
            );
            foreach($a as $func => $args){
                $x = call_user_func_array($func, $args);
                $page .= $x;
            }
            return $page;
        };

        $file = "test.php";
        ?>

    test.php:

        <?php include "testFunctions.php"; ?>
        <br><hr>here is the rendered html:<hr>
        <?php
        $a = content($input, "display");
        echo $a;
        ?>
        <br><hr>here is the source code:<hr>
        <?php
        $a = content($input, "source");
        echo $a;
        ?>
        <br><hr>here is the source code of the entire page after the php has been executed:<hr>
        <div style="margin-left:40px; background-color:#ebebeb;">
        <?php
        $a = pagePrint($file);
        echo $a;
        ?>
        </div>

    I'd like to keep all the PHP in the testFunctions.php file, so I can place simple function calls into templates for HTML emails. Thanks!


  • Template class + virtual function = must implement?

    - by sold
    This code:

        template <typename T>
        struct A {
            T t;
            void DoSomething() { t.SomeFunction(); }
        };

        struct B { };

        A<B> a;

    compiles without any complaints, as long as I never call a.DoSomething(). However, if I define DoSomething as a virtual function, I get a compile error saying that B doesn't declare SomeFunction. I can somewhat see why it happens (DoSomething should now have an entry in the vtable), but I can't help feeling that it's not really necessary. Plus it sucks. Is there any way to overcome this? EDIT 2: Okay. I hope this time it makes sense: let's say I am doing intrusive ref counting, so all entities must inherit from a base class Object. How can I support primitive types too? I can define:

        template <typename T>
        class Primitive : public Object {
            T value;
        public:
            Primitive(const T &value = T());
            operator T() const;
            Primitive<T> &operator =(const T &value);
            Primitive<T> &operator +=(const T &value);
            Primitive<T> &operator %=(const T &value);
            // And so on...
        };

    so I can use Primitive<int>, Primitive<char>... But what about Primitive<float>? It seems like a problem, because floats don't have a %= operator. But actually, it isn't, since I'll never call operator %= on a Primitive<float>. That's one of the deliberate features of templates. If, for some reason, I were to define operator %= as virtual, or if I pre-export Primitive<float> from a DLL to avoid link errors, the compiler will complain even if I never call operator %= on a Primitive<float>. If it would just fill in a dummy value for operator %= in Primitive<float>'s vtable (one that raises an exception?), everything would be fine.
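
    The behaviour follows from the template instantiation rules: non-virtual members of a class template are only instantiated when actually used, but virtual members are instantiated together with the class itself, because the compiler has to emit a vtable entry (and therefore a function body) for them. Below is a minimal sketch of both the rule and one common escape hatch, an explicit specialization of the offending member that supplies the "dummy body" the question asks for; the P name is just illustrative.

        // virtual_instantiation.cpp
        #include <iostream>
        #include <stdexcept>

        template <typename T>
        struct A {
            T t;
            void DoSomething() { t.SomeFunction(); }  // OK: never instantiated
            // virtual void DoSomething() { t.SomeFunction(); }  // would force
            // instantiation for every A<T>, breaking A<B> below
        };

        struct B { };

        template <typename T>
        struct P {
            T value;
            virtual T mod(const T& rhs) { return value %= rhs; }
            virtual ~P() { }
        };

        // Escape hatch: give the float vtable slot a harmless body, so the
        // primary template's %= is never instantiated for float.
        template <>
        float P<float>::mod(const float&) {
            throw std::logic_error("%= not supported for float");
        }

        int main() {
            A<B> a;        // fine: DoSomething() stays uninstantiated
            P<float> p;    // fine too, thanks to the specialization
            (void)a; (void)p;
            std::cout << "compiles\n";
        }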


  • C++ stream as a parameter when overloading operator<<

    - by TheOm3ga
    I'm trying to write my own logging class and use it as a stream:

        logger L;
        L << "whatever" << std::endl;

    This is the code I started with:

        #include <iostream>
        using namespace std;

        class logger{
        public:
            template <typename T>
            friend logger& operator <<(logger& log, const T& value);
        };

        template <typename T>
        logger& operator <<(logger& log, T const & value) {
            // Here I'd output the values to a file and stdout, etc.
            cout << value;
            return log;
        }

        int main(int argc, char *argv[]) {
            logger L;
            L << "hello" << '\n';          // This works
            L << "bye" << "alo" << endl;   // This doesn't work
            return 0;
        }

    But I was getting an error when trying to compile, saying that there was no definition for operator<<:

        pruebaLog.cpp:31: error: no match for 'operator<<' in 'operator<< [with T = char [4]](((logger&)((logger*)operator<< [with T = char [4]](((logger&)(& L)), ((const char (&)[4])"bye")))), ((const char (&)[4])"alo")) << std::endl'

    So I've been trying to overload operator<< to accept this kind of stream, but it's driving me mad; I don't know how to do it. I've been looking, for instance, at the definition of std::endl in the ostream header file, and wrote a function with this header:

        logger& operator <<(logger& log, const basic_ostream<char,char_traits<char> >& (*s)(basic_ostream<char,char_traits<char> >&))

    But no luck. I've tried the same using templates instead of directly using char, and also tried simply using "const ostream& os", and nothing. Another thing that bugs me is that, in the error output, the first argument for operator<< changes: sometimes it's a reference to a pointer, sometimes it looks like a double reference...
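
    For context: std::endl is a function template, so the generic operator<< can't deduce T from it, and the attempted overload above doesn't match because of the extra const (std::endl takes and returns a plain ostream&). A minimal sketch of the usual fix, an overload whose parameter is a non-const manipulator function pointer, looks like this:

        // logger_manip.cpp -- let a custom logger accept std::endl & co.
        #include <iostream>

        class logger {
        public:
            template <typename T>
            logger& operator<<(const T& value) {
                std::cout << value;   // a real logger would also write to a file
                return *this;
            }
            // Matches manipulators such as std::endl and std::flush.
            logger& operator<<(std::ostream& (*manip)(std::ostream&)) {
                manip(std::cout);
                return *this;
            }
        };

        int main() {
            logger L;
            L << "bye" << "alo" << std::endl;  // now compiles
            return 0;
        }

    The same two overloads can be written as free friend functions, as in the original code; the key part is the exact manipulator signature std::ostream& (*)(std::ostream&), with no const.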


  • How to give properties to C++ classes (interfaces)

    - by caas
    Hello, I have built several classes (A, B, C...) which perform operations on the same BaseClass. Example:

        struct BaseClass {
            int method1();
            int method2();
            int method3();
        };

        struct A {
            int methodA(BaseClass& bc) { return bc.method1(); }
        };

        struct B {
            int methodB(BaseClass& bc) { return bc.method2() + bc.method1(); }
        };

        struct C {
            int methodC(BaseClass& bc) { return bc.method3() + bc.method2(); }
        };

    But as you can see, each class A, B, C... only uses a subset of the available methods of the BaseClass, and I'd like to split the BaseClass into several chunks so that it is clear what is used and what is not. For example, a solution could be to use multiple inheritance:

        // A uses only method1()
        struct InterfaceA {
            virtual int method1() = 0;
        };
        struct A {
            int methodA(InterfaceA&);
        };

        // B uses method1() and method2()
        struct InterfaceB {
            virtual int method1() = 0;
            virtual int method2() = 0;
        };
        struct B {
            int methodB(InterfaceB&);
        };

        // C uses method2() and method3()
        struct InterfaceC {
            virtual int method2() = 0;
            virtual int method3() = 0;
        };
        struct C {
            int methodC(InterfaceC&);
        };

    The problem is that each time I add a new type of operation, I need to change the implementation of BaseClass. For example:

        // D uses method1() and method3()
        struct InterfaceD {
            virtual int method1() = 0;
            virtual int method3() = 0;
        };
        struct D {
            int methodD(InterfaceD&);
        };

        struct BaseClass : public InterfaceA, public InterfaceB, public InterfaceC // here I need to add InterfaceD
        {
            ...
        };

    Do you know a clean way to do this? Thanks for your help. Edit: I forgot to mention that it can also be done with templates, but I don't like this solution either, because the required interface does not appear explicitly in the code. You have to try to compile the code to verify that all required methods are implemented correctly. Plus, it would require instantiating different versions of the classes (one for each BaseClass type template parameter), and this is not always possible nor desired.
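
    One direction that is sometimes used for this, sketched below under the question's own names, is to cut the interfaces down to one method each: BaseClass implements every one-method interface exactly once, and each new consumer (like D) just lists the capabilities it needs as parameters, so BaseClass never changes again. This is only an illustrative sketch, not the only clean answer.

        // interface_segregation.cpp
        #include <iostream>

        struct HasMethod1 { virtual int method1() = 0; virtual ~HasMethod1() {} };
        struct HasMethod2 { virtual int method2() = 0; virtual ~HasMethod2() {} };
        struct HasMethod3 { virtual int method3() = 0; virtual ~HasMethod3() {} };

        // Implemented once; never edited again when consumers are added.
        struct BaseClass : HasMethod1, HasMethod2, HasMethod3 {
            int method1() { return 1; }
            int method2() { return 2; }
            int method3() { return 3; }
        };

        // D uses method1() and method3(): it names the two capabilities
        // separately instead of requiring a combined InterfaceD.
        struct D {
            int methodD(HasMethod1& m1, HasMethod3& m3) {
                return m1.method1() + m3.method3();
            }
        };

        int main() {
            BaseClass bc;
            D d;
            std::cout << d.methodD(bc, bc) << "\n";  // prints 4
        }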


  • C++ template function compiles in header but not implementation

    - by flies
    I'm trying to learn templates and I've run into this confounding error. I'm declaring some functions in a header file, and I want to make a separate implementation file where the functions will be defined. Here's the code that calls the header (dum.cpp):

        #include <iostream>
        #include <vector>
        #include <string>
        #include "dumper2.h"

        int main() {
            std::vector<int> v;
            for (int i = 0; i < 10; i++) {
                v.push_back(i);
            }
            test();
            std::string s = ", ";
            dumpVector(v, s);
        }

    Now, here's a working header file (dumper2.h):

        #include <iostream>
        #include <string>
        #include <vector>

        void test();

        template <class T>
        void dumpVector(std::vector<T> v, std::string sep);

        template <class T>
        void dumpVector(std::vector<T> v, std::string sep) {
            typename std::vector<T>::iterator vi;
            vi = v.begin();
            std::cout << *vi;
            vi++;
            for (; vi < v.end(); vi++) {
                std::cout << sep << *vi;
            }
            std::cout << "\n";
            return;
        }

    with implementation (dumper2.cpp):

        #include <iostream>
        #include "dumper2.h"

        void test() {
            std::cout << "!olleh dlrow\n";
        }

    The weird thing is that if I move the code that defines dumpVector from the .h to the .cpp file, I get the following error:

        g++ -c dumper2.cpp -Wall -Wno-deprecated
        g++ dum.cpp -o dum dumper2.o -Wall -Wno-deprecated
        /tmp/ccKD2e3G.o: In function `main':
        dum.cpp:(.text+0xce): undefined reference to `void dumpVector<int>(std::vector<int, std::allocator<int> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
        collect2: ld returned 1 exit status
        make: *** [dum] Error 1

    So why does it work one way and not the other? Clearly the compiler can find test(), so why can't it find dumpVector?
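
    The short version of what's going on: a template is not code until it is instantiated, so when dum.cpp is compiled, the compiler needs the full definition of dumpVector to generate dumpVector<int>; a definition hidden in dumper2.cpp never gets instantiated for int, and the linker comes up empty. If the definition really has to stay in the .cpp file, the usual workaround is an explicit instantiation for each type the program uses. A sketch of what dumper2.cpp would look like under that scheme (with the definition removed from the header, which keeps only the declaration):

        // dumper2.cpp -- template defined here, explicitly instantiated below.
        #include <iostream>
        #include <string>
        #include <vector>
        #include "dumper2.h"   // declares test() and dumpVector<T>

        void test() {
            std::cout << "!olleh dlrow\n";
        }

        template <class T>
        void dumpVector(std::vector<T> v, std::string sep) {
            typename std::vector<T>::iterator vi = v.begin();
            std::cout << *vi;
            for (++vi; vi < v.end(); ++vi) {
                std::cout << sep << *vi;
            }
            std::cout << "\n";
        }

        // Force the compiler to emit dumpVector<int> into this object file,
        // so the linker can resolve the call from dum.cpp.
        template void dumpVector<int>(std::vector<int>, std::string);

    The trade-off is that every element type used anywhere in the program needs its own explicit instantiation line, which is why template definitions usually just live in the header.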


  • How to run VisualSVN Server on port 443 running IIS on same server?

    - by Metro Smurf
    Server 2008 R2 SP1, VisualSVN Server 2.1.6. The IIS server has about 10 sites. One of them uses https over port 443 with the following bindings:

        http  x.x.x.39:80  site.com
        http  x.x.x.39:80  www.site.com
        https x.x.x.39:443

    VisualSVN Server properties:

        server name: svn.SomeSite.com
        server port: 443
        server binding: x.x.x.40

    No sites on IIS are listening on x.x.x.40. When starting up VisualSVN Server, the following errors are thrown:

        make_sock: could not bind to address x.x.x.40:443 (OS 10013)
        An attempt was made to access a socket in a way forbidden by its access permissions.
        no listening sockets available, shutting down

    When I stop site.com in IIS, VisualSVN Server starts up without a problem. When I bind VisualSVN Server to port 8443 and start site.com, VisualSVN Server starts without a problem. My goal is to be able to access the VisualSVN Server with a normal URL, i.e., one that doesn't use a port number in the address: https://svn.site.com vs https://svn.site.com:8443. What needs to be configured to allow VisualSVN Server to run on port 443 with IIS running on the same server?

    Edit / Answer: The answer provided by Ivan did point me in the right direction. For anyone else running into this, here is a bit more information. Even though my IIS had no bindings set to the IP address I am using for VisualSVN, IIS will still take the IP address hostage unless IIS is explicitly told which IP addresses to listen to. There is no GUI in Win Server 2k8 to configure the IP addresses for IIS to listen on; by default, IIS listens to all IP addresses assigned to the server. The following will help configure IIS to only listen to the IP addresses you want:

        1. Open a command prompt.
        2. Enter: netsh
        3. Enter: http
        4. Enter: show iplisten -- this will show a table of the IP addresses IIS is
           listening to. By default, the table will be empty (I guess this means IIS
           listens to all IPs).
        5. For each IP address IIS should listen to, enter: add iplisten ipaddress=x.x.x.x
        6. Enter: show iplisten -- you should now see all the IP addresses added to
           the listening table.
        7. Exit, and then reset IIS.

    Each of these commands can also be run directly, i.e., netsh http show iplisten. If you need to delete an IP address from the listening table:

        1. Open a command prompt.
        2. Enter: netsh
        3. Enter: http
        4. Enter: delete iplisten ipaddress=x.x.x.x
        5. Exit, and then reset IIS.


  • Slow login to load-balanced Terminal Server 2008 behind Gateway Server

    - by Frans
    I have a small load-balanced (using Session Broker) Terminal Server 2008 farm behind a Gateway Server which is accessed from the Internet. The problem I have is a delay of 20-30 seconds if the session broker switches the user to another server during login. I think this is related to the fact that I am forcing the security layer to be RDP rather than SSL.

    The background: the Gateway server has a public routable IP address and DNS name so it can be accessed from the Internet, and all users come in via this route (the system is used to provide access to hosted applications to external customers). The actual terminal servers only have internal IP addresses. This works really well, except that with a Vista or Windows 7 client, the Remote Desktop client will negotiate with the server to use SSL for the security layer. This then exposes the auto-generated certificate that TS1 or TS2 has -- but since they are internal, auto-generated certificates, the client will get a stern warning that the certificate is not valid. I can't give the servers a properly authorised certificate, as the servers do not have a public routable IP address or DNS name. Instead, I am using Group Policy to force the connections to be over RDP instead of SSL:

        \Computer Configuration\Policies\Administrative Templates\Windows Components\Terminal Services\Terminal Server\Security\Require use of specific security layer for remote (RDP) connections

    The Windows 7 user now gets a much less stern warning that "the server's identity cannot be confirmed", which I can live with. I don't have enough control over the end users' machines to ask them to install a new root certificate either. TS1 and TS2 are also load-balanced using the Session Broker, which is installed on the Gateway Server. I am using round-robin DNS, so the user's initial connection will go via Gateway1 to either TS1 or TS2. TS1/TS2 will then talk to the session broker and may pass the user to the other server. I.e. the user may get connected to TS2, but after talking to the session broker the user may be passed to TS1, which is where they will run their session. When this switching of servers happens, in my setup, the screen sits with the word "Welcome" for 20-30 seconds, after which it flickers, "Welcome" is shown again, and then it flashes through the normal login screens (i.e. "wait for user profile manager" etc.).

    Having done some research, I think what is happening is that the user is being fully logged on to TS2 (while "Welcome" is shown) before being passed to TS1, where they are then logged in again. It is interesting that normally when you see the "Welcome" word, the little circle to the left rotates. However, it does not rotate during this delay -- the screen just looks frozen. This blog post leads me to think that this is because CredSSP is not being used, probably because I am disallowing SSL and forcing RDP.

    What I have tried: I enabled SSL again, which removes the "Welcome" delay. However, it seems to introduce a new delay much earlier in the process -- specifically, when the RDP client is saying "initialising connection", it is now much slower. Quite apart from the fact that my certificate problem precludes me from using that solution without considerable difficulty. I tried disabling the load balancing (just removing the servers from the session broker farm), and the connections do not have any delay. The problem is also intermittent, in the sense that it only happens when the user gets bumped from one server to another. I tested this by trying to connect directly to TS1 (via the Gateway, of course) and then checking which server I actually got connected to. Just to be sure, I also bypassed the round-robin DNS to see if it had any impact, and it doesn't. The setup is essentially in line with MS recommendations here: TS Session Broker Load Balancing Step-by-Step Guide. I also tried changing to a dedicated redirector: basically, rather than using round-robin DNS, I pointed my DNS at the Gateway server and configured it to be a dedicated redirector (disallow logons, add it to the farm). Same problem, alas. Any ideas or suggestions gratefully received.


  • Sign an OpenSSL .CSR with Microsoft Certificate Authority

    - by kce
    I'm in the process of building a Debian FreeRADIUS server that does 802.1x authentication for domain members. I would like to sign my RADIUS server's SSL certificate (used for EAP-TLS) and leverage the domain's existing PKI. The RADIUS server is joined to the domain via Samba and has a machine account, as displayed in Active Directory Users and Computers. The domain controller I'm trying to sign my RADIUS server's key against does not have IIS installed, so I can't use the preferred Certsrv web page to generate the certificate. The MMC tools won't work, as they can't access the certificate stores on the RADIUS server, because they don't exist. This leaves the certreq.exe utility. I'm generating my .CSR with the following command:

        openssl req -nodes -newkey rsa:1024 -keyout server.key -out server.csr

    The resulting .CSR:

        ******@mis-ke-lnx:~/G$ openssl req -text -noout -in mis-radius-lnx.csr
        Certificate Request:
            Data:
                Version: 0 (0x0)
                Subject: C=US, ST=Alaska, L=CITY, O=ORG, OU=DEPT, CN=ME/emailAddress=MYEMAIL
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                    RSA Public Key: (1024 bit)
                        Modulus (1024 bit):
                            00:a8:b3:0d:4b:3f:fa:a4:5f:78:0c:24:24:23:ac:
                            cf:c5:28:af:af:a2:9b:07:23:67:4c:77:b5:e8:8a:
                            08:2e:c5:a3:37:e1:05:53:41:f3:4b:e1:56:44:d2:
                            27:c6:90:df:ae:3b:79:e4:20:c2:e4:d1:3e:22:df:
                            03:60:08:b7:f0:6b:39:4d:b4:5e:15:f7:1d:90:e8:
                            46:10:28:38:6a:62:c2:39:80:5a:92:73:37:85:37:
                            d3:3e:57:55:b8:93:a3:43:ac:2b:de:0f:f8:ab:44:
                            13:8e:48:29:d7:8d:ce:e2:1d:2a:b7:2b:9d:88:ea:
                            79:64:3f:9a:7b:90:13:87:63
                        Exponent: 65537 (0x10001)
                Attributes:
                    a0:00
            Signature Algorithm: sha1WithRSAEncryption
                35:57:3a:ec:82:fc:0a:8b:90:9a:11:6b:56:e7:a8:e4:91:df:
                73:1a:59:d6:5f:90:07:83:46:aa:55:54:1c:f9:28:3e:a6:42:
                48:0d:6b:da:58:e4:f5:7f:81:ee:e2:66:71:78:85:bd:7f:6d:
                02:b6:9c:32:ad:fa:1f:53:0a:b4:38:25:65:c2:e4:37:00:16:
                53:d2:da:f2:ad:cb:92:2b:58:15:f4:ea:02:1c:a3:1c:1f:59:
                4b:0f:6c:53:70:ef:47:60:b6:87:c7:2c:39:85:d8:54:84:a1:
                b4:67:f0:d3:32:f4:8e:b3:76:04:a8:65:48:58:ad:3a:d2:c9:
                3d:63

    I'm trying to submit my certificate using the following certreq.exe command:

        certreq -submit -attrib "CertificateTemplate:Machine" server.csr

    I receive the following error upon doing so:

        RequestId: 601
        Certificate not issued (Denied) Denied by Policy Module
        The DNS name is unavailable and cannot be added to the Subject Alternate name. 0x8009480f (-2146875377)
        Certificate Request Processor: The DNS name is unavailable and cannot be added to the Subject Alternate name. 0x8009480f (-2146875377)
        Denied by Policy Module

    My certificate authority has the following certificate templates available. If I try to submit via certreq.exe using "CertificateTemplate:Computer" instead of "CertificateTemplate:Machine", I get an error reporting that "the requested certificate template is not supported by this CA." My Google-fu has failed me so far in trying to understand this error... I feel like this should be a relatively simple task, as X.509 is X.509 and OpenSSL generates the .CSRs in the required PKCS#10 format. I can't be the only one out there trying to sign an OpenSSL-generated key on a Linux box with a Windows Certificate Authority, so how do I do this (preferably using the offline certreq.exe tool)?


  • 502 Bad Gateway - nginx

    - by ADH2
    I am randomly receiving 502 Bad Gateway error pages. I can reproduce the issue by modifying hosting plans in Plesk 11 and at the same time refreshing a page for a minute or two. When I get the 502 error page, all I have to do is refresh the browser and the page loads properly. I am using CentOS 6. This is from today's log (/var/log/nginx/error.log):

        2012/12/04 10:52:07 [error] 21272#0: *545 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 82.77.68.111, server: likeit-craiova.ro, request: "GET / HTTP/1.1", upstream: "http://195.254.135.113:7080/", host: "likeit-craiova.ro"

    This is the nginx config (/etc/nginx/nginx.conf):

        #user nginx;
        worker_processes 1;

        #error_log /var/log/nginx/error.log;
        #error_log /var/log/nginx/error.log notice;
        #error_log /var/log/nginx/error.log info;
        #pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;

            proxy_buffer_size 128k;
            proxy_buffers 4 256k;
            proxy_busy_buffers_size 256k;

            #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
            #                '$status $body_bytes_sent "$http_referer" '
            #                '"$http_user_agent" "$http_x_forwarded_for"';
            #access_log /var/log/nginx/access.log main;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;
            #tcp_nodelay on;

            #gzip on;
            #gzip_disable "MSIE [1-6]\.(?!.*SV1)";

            server_tokens off;

            include /etc/nginx/conf.d/*.conf;
        }

    FastCGI config file (/etc/nginx/fastcgi.conf):

        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param HTTPS $https if_not_empty;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    The FastCGI parameters config (/etc/nginx/fastcgi_params) is identical to the above, minus the SCRIPT_FILENAME line.

    Also, I'm getting this on a shared hosting server, on one of the domains:

        Unable to generate the web server configuration file on the host because of the following errors:
        nginx: [warn] duplicate MIME type "text/html" in /etc/nginx/nginx.conf:45
        nginx: [emerg] open() "/var/www/vhosts/partydayandnight.ro/statistics/logs/proxy_access_log" failed (24: Too many open files)
        nginx: configuration file /etc/nginx/nginx.conf test failed
        Please resolve the errors in web server configuration templates and generate the file again.

    Why is this appearing, what trouble may it cause, and what can I do to get these errors fixed? Thank you!


  • Automating the choice between JPEG and PNG with a script

    - by MHC
    Choosing the right format to save your images in is crucial for preserving image quality and reducing artifacts. Different formats follow different compression methods and come with their own sets of advantages and disadvantages. JPG, for instance, is suited to real-life photographs that are rich in color gradients. The lossless PNG, on the other hand, is far superior when it comes to schematic figures. Picking the right format can be a chore when working with a large number of files, which is why I would love to find a way to automate it.

    A little bit of background on my particular use case: I am working on a number of handouts for a series of lectures at my university. The handouts are rich in figures, which I have to extract from PDF-formatted slides. Extracting these images gives me lossless PNGs, which are needlessly large at times. Converting these particular files to JPEG can reduce their size to less than 20% of the original file size while maintaining the same quality. This is important, as working with hundreds of large images in word processors is pretty crash-prone. Batch converting all extracted PNGs to JPEGs is not an option I am willing to follow, as many if not most images are better suited to be formatted as PNGs: converting those would result in insignificant size reductions and sometimes even increases in file size -- that's at least what my test runs showed. What we can take from this is that file size after compression can serve as an indicator of which format suits a particular image best. It's not a particularly accurate predictor, but it works well enough. So why not use it in the form of a script?

    I included inotifywait because I would prefer the script to be executed automatically as soon as I drag an extracted image into a folder. This is a simpler version of the script that I've been using for the last couple of weeks:

        #!/bin/bash
        inotifywait -m --format "%w%f" --exclude '.jpg' -r -e create -e moved_to --fromfile '/home/MHC/.scripts/Workflow/Conversion/include_inotifywait' | while read file; do
            mogrify -format jpg -quality 92 "$file"
        done

    The advanced version of the script would have to be able to:

        - handle spaces in file names and directory names
        - preserve the original file names
        - flatten PNG images if an alpha value is set
        - compare the file size between the temporary converted image and its original
        - determine if the difference is greater than a given percentage
        - act accordingly

    The actual conversion could be done with imagemagick tools:

        convert -quality 92 -flatten -background white file.png file.jpg

    Unfortunately, my bash skills aren't even close to advanced enough to convert the scheme above into an actual script, but I am sure many of you can. My reputation points on here are pretty low, but I will gladly award the most helpful answer with the highest bounty I can set. References: http://www.formortals.com/introducing-cnb-imageguide/, http://www.turnkeylinux.org/blog/png-vs-jpg Edit: Also see my comments below for some more information on why I think this script would be the best solution to the problem I am facing.


  • DNS problems with Google on Windows 7

    - by awishformore
    Hello dear superuser community. I had no idea where to post this, because it is a problem that completely baffles me. I have a lot of experience with network configuration, but I am completely out of ideas on how to fix this. I have a Fritz!Box branded router on my ISP 1&1 in Germany. My computer is connected to it with a normal Ethernet cable. I always set my IP manually and use the Google DNS servers for name resolution. I also tried OpenDNS, and the result is the same. With that configuration, the following happens:

        - Google search responds with a big delay
        - Gmail, Google Calendar & Google Drive requests time out the majority of the time

    In order to troubleshoot, I set the network connection to DHCP for both IP & DNS. At that point, what happens is the following:

        - Google search times out most of the time
        - Gmail, Google Calendar & Google Drive work most of the time

    Sometimes the sites that time out will come up but, weirdly enough, the pictures on the buttons will be missing. For instance, the magnifying glass on Google will be gone, or the circle arrow on Gmail (but all buttons, of course). All other websites load just fine -- and very quickly. All other network functionality is completely unimpacted. The behaviour of fixed IP & Google DNS vs automatic IP & DNS is easily reproducible. I am going crazy trying to fix this, as I have no idea what the hell is going on at this point. Here is a list of the things I have tried so far:

        - Flushed the DNS
        - Tried different browsers (works fine on my laptop, by the way)
        - Tried disabling Teredo & the IPv6 stack
        - Emptied all caches
        - Checked the HOSTS file
        - Rebooted the router
        - Reset the router
        - Reinstalled the network adapter
        - Tracert displays a normal route until timing out at one point
        - Ping usually doesn't work for the unreachable sites either
        - Ran both a complete Norton 360 & Kaspersky 2012 scan
        - Ran the Kaspersky Virus Removal Tool in safe mode
        - Tried the connection in safe mode with networking enabled

    If you have any ideas, please let me know. I'm getting desperate...


  • Fedora 17 keeps using Fedora 16 kernel

    - by MTilsted
    I ran preupgrade to upgrade my Fedora 16 (x64) to Fedora 17, and it seemed to work fine: I got the new GIMP 2.8, GCC 4.7.0, and so on. But the system keeps using the old kernel from FC16. uname -a gives me:

        Linux localhost.localdomain 3.3.6-3.fc16.x86_64 #1 SMP Wed May 16 21:43:01 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    The system downloaded the new kernel, so I have:

        /boot/vmlinuz-3.3.7-1.fc17.x86_64
        /boot/System.map-3.3.7-1.fc17.x86_64
        /boot/initramfs-3.3.7-1.fc17.x86_64.img
        /boot/config-3.3.7-1.fc17.x86_64

    But the system keeps using the old kernel from FC16. My /boot/grub2/grub.cfg file looks like this:

        #
        # DO NOT EDIT THIS FILE
        #
        # It is automatically generated by grub2-mkconfig using templates
        # from /etc/grub.d and settings from /etc/default/grub
        #

        ### BEGIN /etc/grub.d/00_header ###
        if [ -s $prefix/grubenv ]; then
          load_env
        fi
        set default="0"
        if [ "${prev_saved_entry}" ]; then
          set saved_entry="${prev_saved_entry}"
          save_env saved_entry
          set prev_saved_entry=
          save_env prev_saved_entry
          set boot_once=true
        fi

        function savedefault {
          if [ -z "${boot_once}" ]; then
            saved_entry="${chosen}"
            save_env saved_entry
          fi
        }

        function load_video {
          insmod vbe
          insmod vga
          insmod video_bochs
          insmod video_cirrus
        }

        set timeout=5
        ### END /etc/grub.d/00_header ###

        ### BEGIN /etc/grub.d/10_linux ###
        menuentry 'Fedora (3.3.6-3.fc16.x86_64)' --class fedora --class gnu-linux --class gnu --class os {
          load_video
          set gfxpayload=keep
          insmod gzio
          insmod part_gpt
          insmod ext2
          set root='(hd0,gpt2)'
          search --no-floppy --fs-uuid --set=root 3521a578-5829-4fb4-a485-8c097df77d07
          echo 'Loading Fedora (3.3.6-3.fc16.x86_64)'
          linux /vmlinuz-3.3.6-3.fc16.x86_64 root=UUID=57459a16-97a0-46a4-8e71-cc3ec0ca4a3e ro KEYTABLE=dvorak rd.lvm=0 rd.dm=0 quiet SYSFONT=latarcyrheb-sun16 rhgb rd.md.uuid=60956781:734d95ba:424311e2:796702a7 rd.luks=0 LANG=en_US.UTF-8
          echo 'Loading initial ramdisk ...'
          initrd /initramfs-3.3.6-3.fc16.x86_64.img
        }
        menuentry 'Fedora (3.3.5-2.fc16.x86_64)' --class fedora --class gnu-linux --class gnu --class os {
          load_video
          set gfxpayload=keep
          insmod gzio
          insmod part_gpt
          insmod ext2
          set root='(hd0,gpt2)'
          search --no-floppy --fs-uuid --set=root 3521a578-5829-4fb4-a485-8c097df77d07
          echo 'Loading Fedora (3.3.5-2.fc16.x86_64)'
          linux /vmlinuz-3.3.5-2.fc16.x86_64 root=UUID=57459a16-97a0-46a4-8e71-cc3ec0ca4a3e ro KEYTABLE=dvorak rd.lvm=0 rd.dm=0 quiet SYSFONT=latarcyrheb-sun16 rhgb rd.md.uuid=60956781:734d95ba:424311e2:796702a7 rd.luks=0 LANG=en_US.UTF-8
          echo 'Loading initial ramdisk ...'
          initrd /initramfs-3.3.5-2.fc16.x86_64.img
        }
        ### END /etc/grub.d/10_linux ###

        ### BEGIN /etc/grub.d/20_linux_xen ###
        ### END /etc/grub.d/20_linux_xen ###

        ### BEGIN /etc/grub.d/30_os-prober ###
        ### END /etc/grub.d/30_os-prober ###

        ### BEGIN /etc/grub.d/40_custom ###
        # This file provides an easy way to add custom menu entries. Simply type the
        # menu entries you want to add after this comment. Be careful not to change
        # the 'exec tail' line above.
        ### END /etc/grub.d/40_custom ###

        ### BEGIN /etc/grub.d/41_custom ###
        if [ -f $prefix/custom.cfg ]; then
          source $prefix/custom.cfg;
        fi
        ### END /etc/grub.d/41_custom ###

        ### BEGIN /etc/grub.d/90_persistent ###
        ### END /etc/grub.d/90_persistent ###

    Does anyone have a clue why it still only references the FC16 kernels, and how I can fix it? My system uses RAID1 on two disks, but /boot does not use RAID. The mount for /boot is:

        /dev/sda2 on /boot type ext2 (rw,relatime,seclabel,user_xattr,acl,barrier=1)

    And / (the only other filesystem I have) is mounted as:

        /dev/md0 on / type ext4 (rw,relatime,seclabel,user_xattr,acl,barrier=1,data=ordered)


  • Slow Windows Explorer on Windows 7

    - by MadBoy
    I have a laptop with an i7 (4 cores), 8GB RAM and an OCZ Vertex 3 MaxIOPS SSD which, in testing I did just now, does 400 MB/s+ read/write. However, the responsiveness of Windows Explorer is far from perfect. Opening up Computer or Documents and going into folders is very slow (1-5 seconds). I don't have any viruses or spyware, and I have tried changing folder properties to optimize the view for General Items. I tried disabling the Search Indexer, but it made search in Outlook 2010 crawl and didn't have any other effect. Even double-clicking a file takes some time to open things up (like clicking a Word document). I don't have any drives mapped, and my computer is not joined to a domain. I have multiple VPN connections that I connect to, but they all have default gateways disabled. I tried using CCleaner and some Windows 7 tweaking apps to disable some things. I am a power user, using Visual Studio, TortoiseSVN and other developer/administration apps. Any non-obvious ideas?

    Edit: So I've been trying to pinpoint where the issue comes from, and it seems that straight after reboot Windows Explorer opens very fast; when I load 3-4 programs (Royal TS, Visual Studio, Outlook) it's noticeably slower, and the more programs I have open, the worse it gets. After I start closing programs it starts working better, and if I leave 2 open it's fast again. I tried doing some research with DiskMon and other programs from Sysinternals but couldn't find anything suspicious. Below are stats during normal usage with a lot of programs open:

        - RAM usage with a lot of programs open and no swap file (I disabled it for testing): 6.95GB
        - CPU usage: 15%, none of the cores takes more than 50% (I have VS 2010 open x 4)

        HD Tune Pro: OCZ-VERTEX3 MI Benchmark
        Test capacity: full
        Read transfer rate
        Transfer Rate Minimum : 363.9 MB/s
        Transfer Rate Maximum : 505.5 MB/s
        Transfer Rate Average :
        Access Time :
        Burst Rate :
        CPU Usage :

        HD Tune Pro: OCZ-VERTEX3 MI File Benchmark
        Drive C:
        Transfer rate test
        File Size: 500 MB
        Sequential read                  484102 KB/s
        Sequential write                 444714 KB/s
        Random read                      7779 IOPS
        Random write                     16888 IOPS
        Random read (queue depth = 32)   73007 IOPS
        Random write (queue depth = 32)  69790 IOPS

        HD Tune Pro: OCZ-VERTEX3 MI Random Access
        Test capacity: full
        Read test
        Transfer size   operations/sec   avg. access time   max. access time   avg. speed
        512 bytes       3260 IOPS        0.306 ms           2.106 ms           1.592 MB/s
        4 KB            4161 IOPS        0.240 ms           2.006 ms           16.256 MB/s
        64 KB           2382 IOPS        0.419 ms           2.367 ms           148.934 MB/s
        1 MB            449 IOPS         2.225 ms           4.197 ms           449.407 MB/s
        Random          809 IOPS         1.235 ms           6.551 ms           410.527 MB/s

        HD Tune Pro: OCZ-VERTEX3 MI Extra Tests
        Test capacity: full
        Random seek               3975 IOPS   0.252 ms   1.941 MB/s
        Random seek 4 KB          4245 IOPS   0.236 ms   16.583 MB/s
        Butterfly seek            4086 IOPS   0.245 ms   1.995 MB/s
        Random seek / size 64 KB  3812 IOPS   0.262 ms   58.606 MB/s
        Random seek / size 8 MB   120 IOPS    8.348 ms   485.737 MB/s
        Sequential outer          4524 IOPS   0.221 ms   282.721 MB/s
        Sequential middle         4429 IOPS   0.226 ms   276.818 MB/s
        Sequential inner          5504 IOPS   0.182 ms   344.000 MB/s
        Burst rate                4472 IOPS   0.224 ms   279.475 MB/s


  • Disable error_log. Error_log flooding

    - by user36646
Hello, I have a webserver running an old version of Gambio (an xt:Commerce fork). The error_log in the directory above public_html is flooding with errors, about 30 MB every 15 minutes. How can I disable this log? I can't fix all the errors. Here are a few examples:

```
[warn] mod_fcgid: stderr: PHP Notice: Undefined variable: key in /usr/www/users/foo//includes/classes/class.inputfilter.php on line 98
[warn] mod_fcgid: stderr: PHP Notice: Undefined index: in /usr/www/users/foo/templ
[warn] mod_fcgid: stderr: in /usr/www/users/foo/templates/gambio/source/inc/xtc_show_category_sectionc.inc.php on line 47
```

They are all "mod_fcgid: stderr" errors. I grepped for "error_log" and "error_report" in the public_html directory, but did not find anything. Here is part of the phpinfo() output:

```
PHP Version: 4.4.9
System: Linux foobar.com 2.6.26-2-686-bigmem #1 SMP Sat Dec 26 09:26:36 UTC 2009 i686
Build Date: Feb 11 2010 13:00:33
Configure Command: './configure' '--prefix=/usr/local/php4' '--with-config-file-path=/etc/php4/cgi' '--with-gd' '--with-jpeg-dir' '--with-png-dir' '--with-tiff-dir' '--with-ttf' '--enable-force-cgi-redirect' '--enable-safe-mode' '--with-zlib' '--enable-ftp' '--enable-url-includes' '--enable-gd-native-ttf' '--enable-trans-sid' '--enable-dbase' '--with-db4' '--with-ldap' '--enable-bcmath' '--enable-calendar' '--enable-memory-limit' '--with-mcal=/usr' '--with-bz2' '--with-mod-dav' '--enable-sockets' '--with-kerberos' '--with-imap-ssl' '--enable-gd-imgstrttf' '--with-freetype-dir' '--with-curl' '--with-mysql' '--with-mhash' '--with-gdbm' '--with-pgsql' '--with-gettext' '--with-xml' '--with-mcrypt' '--with-openssl' '--with-dom' '--without-pear' '--enable-exif' '--with-zip' '--enable-wddx' '--disable-cli' '--enable-fastcgi' '--with-imap' '--enable-xslt' '--with-xslt-sablot=/usr/local/lib' '--enable-mbstring' '--with-dom-xslt' '--with-dom-exslt'
Server API: CGI/FastCGI
Virtual Directory Support: disabled
Configuration File (php.ini) Path: /home/httpd/php-ini/foo/php.ini
PHP API: 20020918
PHP Extension: 20020429
Zend Extension: 20050606
Debug Build: no
Zend Memory Manager: enabled
Thread Safety: disabled
Registered PHP Streams: php, http, ftp, https, ftps, compress.bzip2, compress.zlib
```

**Configuration PHP Core** (Directive / Local Value / Master Value):

```
allow_call_time_pass_reference   On                      On
allow_url_fopen                  Off                     Off
always_populate_raw_post_data    Off                     Off
arg_separator.input              &                       &
arg_separator.output             &                       &
asp_tags                         Off                     Off
auto_append_file                 no value                no value
auto_prepend_file                no value                no value
browscap                         no value                no value
default_charset                  no value                no value
default_mimetype                 text/html               text/html
define_syslog_variables          Off                     Off
disable_classes                  no value                no value
disable_functions                no value                no value
display_errors                   On                      On
display_startup_errors           Off                     Off
doc_root                         no value                no value
docref_ext                       no value                no value
docref_root                      no value                no value
enable_dl                        On                      On
error_append_string              no value                no value
error_log                        no value                no value
error_prepend_string             no value                no value
error_reporting                  2039                    2039
expose_php                       On                      On
extension_dir                    /usr/local/php4/lib/php/extensions/no-debug-non-zts-20020429  (same)
file_uploads                     On                      On
gpc_order                        GPC                     GPC
highlight.bg                     #FFFFFF                 #FFFFFF
highlight.comment                #FF8000                 #FF8000
highlight.default                #0000BB                 #0000BB
highlight.html                   #000000                 #000000
highlight.keyword                #007700                 #007700
highlight.string                 #DD0000                 #DD0000
html_errors                      On                      On
ignore_repeated_errors           Off                     Off
ignore_repeated_source           Off                     Off
ignore_user_abort                Off                     Off
implicit_flush                   Off                     Off
include_path                     .:/usr/local/lib/php/   .:/usr/local/lib/php/
log_errors                       Off                     Off
log_errors_max_len               1024                    1024
magic_quotes_gpc                 On                      On
magic_quotes_runtime             Off                     Off
magic_quotes_sybase              Off                     Off
max_execution_time               120                     120
max_input_nesting_level          500                     500
max_input_time                   -1                      -1
memory_limit                     128000000               128000000
open_basedir                     /usr/www/users/foo:/usr/home/foo:/tmp:/usr/local/lib/php:/usr/local/rmagic:/usr/www/users/he/_system_  (same)
output_buffering                 no value                no value
output_handler                   no value                no value
post_max_size                    128000000               128000000
precision                        14                      14
register_argc_argv               On                      On
register_globals                 Off                     Off
report_memleaks                  On                      On
safe_mode                        Off                     Off
safe_mode_exec_dir               no value                no value
safe_mode_gid                    Off                     Off
safe_mode_include_dir            no value                no value
sendmail_from                    no value                no value
sendmail_path                    /usr/sbin/sendmail -t   /usr/sbin/sendmail -t
serialize_precision              100                     100
short_open_tag                   On                      On
SMTP                             localhost               localhost
smtp_port                        25                      25
sql.safe_mode                    Off                     Off
track_errors                     Off                     Off
unserialize_callback_func        no value                no value
upload_max_filesize              128000000               128000000
upload_tmp_dir                   /usr/foo/foo/.tmp       /usr/foo/.tmp
user_dir                         no value                no value
variables_order                  EGPCS                   EGPCS
xmlrpc_error_number              0                       0
xmlrpc_errors                    Off                     Off
y2k_compliance                   Off                     Off
```
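Worth noting: error_reporting is already 2039 in php.ini, which on PHP 4 is E_ALL & ~E_NOTICE, yet notices still reach the log. That suggests the shop code raises error_reporting again at runtime, so the override probably has to happen in PHP. A minimal sketch, assuming the shop does this in includes/application_top.php (both the file name and that behaviour are assumptions; xt:Commerce forks commonly work this way):

```php
<?php
// Hypothetical snippet placed after the shop's own configuration code in
// includes/application_top.php (path assumed). It silences notices at
// runtime so they never reach mod_fcgid's stderr capture.
error_reporting(E_ALL & ~E_NOTICE); // or error_reporting(0) to silence everything
ini_set('display_errors', '0');     // keep errors out of the CGI output stream
ini_set('log_errors', '0');         // and out of any PHP-side error log
?>
```

Because PHP runs as CGI/FastCGI here, Apache-level php_flag/php_admin_value directives won't apply; the other lever is the php.ini that phpinfo() reports (/home/httpd/php-ini/foo/php.ini), if you have write access to it.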


  • Ubuntu 8.04 LTS + rdiff-backup: should I install from source instead of using the apt repositories?

    - by egarcia
    I'm trying to use rdiff-backup to make backup copies of some folders on an Ubuntu 8.04 LTS server. I'm running the backup from another server with a more modern Ubuntu release (9.10); I'll call that one the "client". rdiff-backup needs to be installed on both the client and the server. It is available in the apt repositories on both machines, so I installed it with sudo apt-get install rdiff-backup. The problem is that the version installed on the server is older than the one on the client (1.1.15 vs. 1.2.8), so I get errors when I try to make them work together. I need both versions to be the same. What is the standard procedure in these cases? Should I upgrade the version on the server, or downgrade the version on the client? And how would I do that? In case it is useful, the rdiff-backup apt package has two dependencies: librsync1 and python-support. Attaching the errors I got in case they help:

```
rdiff-backup egarcia@test::/var/rails/ohwr/backup /home/kikito/backup/files
Warning: Local version 1.2.8 does not match remote version 1.1.15.
Exception '
Warning Security Violation!
Bad request for function: rpath.make_file_dict
with arguments: ['/var/rails/ohwr/backup']
' raised of class '<class 'rdiff_backup.Security.Violation'>':
  File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 304, in error_check_Main
    try: Main(arglist)
  File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 321, in Main
    rps = map(SetConnections.cmdpair2rp, cmdpairs)
  File "/usr/lib/pymodules/python2.6/rdiff_backup/SetConnections.py", line 78, in cmdpair2rp
    return rpath.RPath(conn, filename).normalize()
  File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 884, in __init__
    else: self.setdata()
  File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 908, in setdata
    self.data = self.conn.rpath.make_file_dict(self.path)
  File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 450, in __call__
    return apply(self.connection.reval, (self.name,) + args)
  File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 370, in reval
    if isinstance(result, Exception): raise result

Traceback (most recent call last):
  File "/usr/bin/rdiff-backup", line 30, in <module>
    rdiff_backup.Main.error_check_Main(sys.argv[1:])
  File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 304, in error_check_Main
    try: Main(arglist)
  File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 321, in Main
    rps = map(SetConnections.cmdpair2rp, cmdpairs)
  File "/usr/lib/pymodules/python2.6/rdiff_backup/SetConnections.py", line 78, in cmdpair2rp
    return rpath.RPath(conn, filename).normalize()
  File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 884, in __init__
    else: self.setdata()
  File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 908, in setdata
    self.data = self.conn.rpath.make_file_dict(self.path)
  File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 450, in __call__
    return apply(self.connection.reval, (self.name,) + args)
  File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 370, in reval
    if isinstance(result, Exception): raise result
rdiff_backup.Security.Violation:
Warning Security Violation!
Bad request for function: rpath.make_file_dict
with arguments: ['/var/rails/ohwr/backup']
```
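    The usual move in this situation is to bring the older side up rather than downgrade the newer one. A minimal sketch of a source install of 1.2.8 on the 8.04 LTS server; the download URL and the exact build dependencies are assumptions, so verify them against the project's release page:

```sh
# On the 8.04 LTS server: remove the apt version so the two don't conflict,
# then build the release that matches the client (1.2.8).
sudo apt-get remove rdiff-backup
sudo apt-get install build-essential python-dev librsync-dev

# Release tarball location is an assumption; check the project site.
wget http://download.savannah.gnu.org/releases/rdiff-backup/rdiff-backup-1.2.8.tar.gz
tar xzf rdiff-backup-1.2.8.tar.gz
cd rdiff-backup-1.2.8
python setup.py build
sudo python setup.py install   # installs under /usr/local by default

# Confirm both ends now agree before rerunning the backup:
rdiff-backup --version
```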


  • IE and Google Chrome time out on an IIS6-hosted SSL page that Firefox handles well.

    - by Thomas
    Ok, here's the scenario: up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me, too. But it only timed out in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser.

    To clarify: https://www.mysite.com/changes.php times out on POST, but the same page over plain http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the request and response headers from the page, and they all follow the correct formats.

    Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based; they never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus would hit every machine I have access to in exactly the same manner.

    My setup is:

    - Server: Win2k3, IIS6, PHP 5.2.9-1.
    - Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages.
    - Firefox 2, Firefox 3: no timeouts. Firebug shows no errors or even lengthy periods serving the pages.

    I've spent two days searching for any tech knowledge I can find, and my search parameters are all too general: everyone has problems loading SSL pages in IE and Chrome, for a wide variety of reasons. The infrequent nature of the timeouts and the fact that no errors are being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?
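    One way to take the browsers out of the equation is to replay the failing POST from the command line and watch the TLS handshake directly. A sketch; the hostname and path come from the question, the form fields are placeholders:

```sh
# Inspect the handshake itself; a stall here (rather than in PHP) would
# point at the SSL layer instead of the application:
openssl s_client -connect www.mysite.com:443 -state

# Replay the failing POST outside any browser (field names are placeholders):
curl -v "https://www.mysite.com/changes.php" \
     -d "field1=value1&field2=value2"
```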


  • CPU lockup when loading a folder of bookmarks in Firefox

    - by Gary M. Mugford
    I am running Firefox 3.6 on WinXP SP3, on a dual-core machine with 4 GB of memory. I am also running Avast! free antivirus and ZoneAlarm free firewall, both latest versions. Within the last month, my service provider basically forced me to upgrade to a DOCSIS 3.0 compliant modem (they offered me a deal I couldn't turn down). At that point, I also upgraded to FF 3.6.

    Basically, I am not unhappy with many aspects of this switchover, EXCEPT: when I now load a folder of bookmarks (anywhere from 10 to 38), I get nothing like the load times I experienced a couple of months back. It's taking minutes rather than seconds, and the first bookmark in one of the folders, GMail, rarely loads before timing out.

    I have used the old trick of powering off the cable modem before my day's work; this used to fix 'load-lag' in the old days. I have switched from my ISP's DNS server to Google, to OpenDNS, and back, and nothing seems to work currently. It's not my DNS cache: that's been flushed, and secondary computers also have the same issue when loading folders.

    I have watched the CPU usage: loading a folder will send VSMon (ZoneAlarm) over 40 percent and AvastSvc (Avast!) over 30, and Firefox will then push the needle to 100. There's a brief burst by SVCHost when the others falter in devouring cycles; then everything subsides to single digits once the last tab is loaded. The only other nominal nastiness is VSMon ALWAYS hitting 50 percent when ANY program starts downloading content from the internet. If I shut down ZoneAlarm (and VSMon with it), the same slow loading takes place, but this time System runs at 50 percent plus, again driving the usage to 100 percent.

    I have my doubts that FF 3.6 vs. FF 3.5 is the issue, since the other computers are still running 3.5 and suffer the same problem. Those computers are on, but inactive, most of the time, being backups. Obviously, when the CPU hits 100, I can't do much of anything in Firefox OR in other programs. Video played through WMPC or VLC is extremely choppy, although it doesn't seem to affect the audio.

    Any ideas what I can try next? Thanks, GM
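    Since the security tools spike in step with Firefox here, one cheap isolation step is to rule Firefox's own profile state in or out first. A sketch using Firefox's stock startup switches (the switches are standard; the install path and the diagnosis itself are assumptions):

```
:: Run from a Windows command prompt. Safe Mode starts Firefox with all
:: extensions disabled; the Profile Manager lets you create a throwaway
:: profile with a fresh bookmarks database, then retry the folder load.
"C:\Program Files\Mozilla Firefox\firefox.exe" -safe-mode
"C:\Program Files\Mozilla Firefox\firefox.exe" -ProfileManager
```

    If a clean profile loads the folder quickly, the slowdown lives in the old profile (extensions or the bookmarks/history database) rather than in the modem, DNS, or the security suite.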

