Search Results

Search found 3563 results on 143 pages for 'templates'.


  • XStream handles non-English characters

    - by Yan Cheng CHEOK
    I have the following code:

        /*
         * To change this template, choose Tools | Templates
         * and open the template in the editor.
         */
        package helloworld;

        import com.thoughtworks.xstream.XStream;
        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.InputStream;
        import java.io.OutputStream;
        import javax.swing.JOptionPane;

        /**
         * @author yccheok
         */
        public class Test {
            @SuppressWarnings("unchecked")
            public static <A> A fromXML(Class c, File file) {
                XStream xStream = new XStream();
                InputStream inputStream = null;
                try {
                    inputStream = new java.io.FileInputStream(file);
                    Object object = xStream.fromXML(inputStream);
                    if (c.isInstance(object)) {
                        return (A) object;
                    }
                } catch (Exception exp) {
                    exp.printStackTrace();
                } finally {
                    if (inputStream != null) {
                        try {
                            inputStream.close();
                            inputStream = null;
                        } catch (java.io.IOException exp) {
                            exp.printStackTrace();
                            return null;
                        }
                    }
                }
                return null;
            }

            @SuppressWarnings("unchecked")
            public static <A> A fromXML(Class c, String filePath) {
                return (A) fromXML(c, new File(filePath));
            }

            public static boolean toXML(Object object, File file) {
                XStream xStream = new XStream();
                OutputStream outputStream = null;
                try {
                    outputStream = new FileOutputStream(file);
                    xStream.toXML(object, outputStream);
                } catch (Exception exp) {
                    exp.printStackTrace();
                    return false;
                } finally {
                    if (outputStream != null) {
                        try {
                            outputStream.close();
                            outputStream = null;
                        } catch (java.io.IOException exp) {
                            exp.printStackTrace();
                            return false;
                        }
                    }
                }
                return true;
            }

            public static boolean toXML(Object object, String filePath) {
                return toXML(object, new File(filePath));
            }

            public static void main(String args[]) {
                String s = "\u6210\u4EA4\u91CF";
                // prints ???
                System.out.println(s);
                // fine! shows the characters
                JOptionPane.showMessageDialog(null, s);
                toXML(s, "C:\\A.XML");
                String o = fromXML(String.class, "C:\\A.XML");
                // shows ???
                JOptionPane.showMessageDialog(null, o);
            }
        }

    I run this code from the command prompt on Windows Vista. 1) Why is System.out.println unable to print the Chinese characters to the console? 2) When I open the XStream output file, the saved value is <string>???</string>. How can I make XStream save the Chinese characters correctly? Thanks.
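
    A likely cause: XStream is encoding-agnostic - it writes through whatever encoding the underlying stream uses, and a raw FileOutputStream gets the platform default (a Windows ANSI code page here), which cannot represent these characters, hence the ???. A minimal sketch of the usual fix, wrapping the streams in explicit UTF-8 writers/readers (same XStream API as above):

        import com.thoughtworks.xstream.XStream;
        import java.io.*;

        public class Utf8XmlDemo {
            public static void main(String[] args) throws IOException {
                XStream xStream = new XStream();
                String s = "\u6210\u4EA4\u91CF";

                // Hand XStream a Writer with an explicit encoding
                // instead of a raw FileOutputStream.
                Writer out = new OutputStreamWriter(
                        new FileOutputStream("C:\\A.XML"), "UTF-8");
                xStream.toXML(s, out);
                out.close();

                // Read it back with the same explicit encoding.
                Reader in = new InputStreamReader(
                        new FileInputStream("C:\\A.XML"), "UTF-8");
                String o = (String) xStream.fromXML(in);
                in.close();

                System.out.println(o.equals(s)); // true
            }
        }

    The console symptom is separate: System.out prints through the console code page, so the characters may still show as ??? in cmd.exe even when the file round-trips correctly.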


  • Route Angular to New Controller after Login

    - by MizAkita
    I'm kind of stuck on how to route my Angular app to a new controller after login. I have a simple app that uses 'loginservice'; after logging in, it routes to /home, which has a different template from the index.html (login page). I want to use /home as the route that displays the partial views of my flightforms controllers. What is the best way to configure my routes so that after login, /home is the default and the routes are called into that particular template's view? It seems easy, but I keep getting the /login page when I click on a link which is supposed to pass the partial view into the default.html template:

        var app = angular.module('myApp', ['ngRoute']);

        app.config(['$routeProvider', function($routeProvider) {
            $routeProvider.when('/login', {
                templateUrl: 'partials/login.html',
                controller: 'loginCtrl'
            });
            $routeProvider.when('/home', {
                templateUrl: 'partials/default.html',
                controller: 'defaultCtrl'
            });
        }]);

        flightforms.config(['$routeProvider', function($routeProvider) {
            // sub pages
            $routeProvider.when('/home', {
                templateUrl: 'partials/default.html',
                controller: 'defaultCtrl'
            });
            $routeProvider.when('/status', {
                templateUrl: 'partials/subpages/home.html',
                controller: 'statusCtrl'
            });
            $routeProvider.when('/observer-ao', {
                templateUrl: 'partials/subpages/aobsrv.html',
                controller: 'obsvaoCtrl'
            });
            $routeProvider.when('/dispatch', {
                templateUrl: 'partials/subpages/disp.html',
                controller: 'dispatchCtrl'
            });
            $routeProvider.when('/fieldmgr', {
                templateUrl: 'partials/subpages/fieldopmgr.html',
                controller: 'fieldmgrCtrl'
            });
            $routeProvider.when('/obs-backoffice', {
                templateUrl: 'partials/subpages/obsbkoff.html',
                controller: 'obsbkoffCtrl'
            });
            $routeProvider.when('/add-user', {
                templateUrl: 'partials/subpages/users.html',
                controller: 'userCtrl'
            });
            $routeProvider.otherwise({ redirectTo: '/status' });
        }]);

        app.run(function($rootScope, $location, loginService) {
            var routespermission = ['/home']; // routes that require login
            $rootScope.$on('$routeChangeStart', function() {
                if (routespermission.indexOf($location.path()) != -1) {
                    var connected = loginService.islogged();
                    connected.then(function(msg) {
                        if (!msg.data) $location.path('/login');
                    });
                }
            });
        });

    My controllers are simple. Here's a sample of what they look like:

        var flightformsControllers = angular.module('flightformsController', []);

        flightforms.controller('fieldmgrCtrl', ['$scope', '$http', 'loginService',
            function($scope, loginService) {
                $scope.txt = 'You are logged in';
                $scope.logout = function() {
                    loginService.logout();
                };
            }]);

    Any ideas on how to get my partials to display in the /home default.html template would be appreciated.
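
    One thing worth checking (a sketch, not a confirmed diagnosis): the subpage routes are registered on a separate flightforms module, but the module Angular bootstraps is myApp, whose only known routes are /login and /home - any other path falls back to the login page. Registering everything on one module, or listing the second module as a dependency of myApp, keeps a single $routeProvider configuration:

        // Either register all routes on the module Angular actually bootstraps...
        var app = angular.module('myApp', ['ngRoute']);
        app.config(['$routeProvider', function($routeProvider) {
            $routeProvider
                .when('/login',  { templateUrl: 'partials/login.html',         controller: 'loginCtrl' })
                .when('/home',   { templateUrl: 'partials/default.html',       controller: 'defaultCtrl' })
                .when('/status', { templateUrl: 'partials/subpages/home.html', controller: 'statusCtrl' })
                // ...remaining subpage routes...
                .otherwise({ redirectTo: '/login' });
        }]);

        // ...or make the subpage module a dependency so its config block runs too:
        // var app = angular.module('myApp', ['ngRoute', 'flightforms']);

    Note that ngRoute supports a single ng-view per page, so the subpage partials render wherever the bootstrapped page places its ng-view.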


  • PHP plugin to replace '->' with '.' as the member access operator? Or even better: alternative syntax?

    - by Gigi
    Present-day usable solution: note that if you use an IDE or an advanced editor, you can make a code template, or record a macro that inserts '->' when you press Ctrl and '.' or something. NetBeans has macros, and I have recorded a macro for this, and I like it a lot :) (just click the red circle toolbar button (start recording a macro), then type -> into the editor (that's all the macro will do, insert the arrow into the editor), then click the gray square (stop recording) and assign the 'Ctrl dot' shortcut to it, or whatever shortcut you like).

    The PHP plugin: the plugin would also have to have a different string concatenation operator than the dot. Maybe a double dot? Yeah... why not. All it has to do is set an activation tag so that it doesn't replace/interpret '.' as '->' for old scripts and scripts that don't intend to use this. Something like this (notice the '<?php' tag modified to '<?php+'):

        <?php+ $obj.i = 5 ?>

    This way it wouldn't break old code. (And you can just add the '<?php+' code template to your editor and then type 'php tab' (for NetBeans) and it would insert '<?php+'.)

    With the alternative syntax method you could even have old and new syntax cohabiting on the same page, like this (I am illustrating this to show the great compatibility of this method, not because you would want to do this):

        <?php+ $obj.i = 5; ?>
        <?php $obj->str = 'a' . 'b'; ?>

    You could change the tag to something more explanatory, in case somebody who doesn't know about the plugin reads the script and thinks it's a syntax error:

        <?php-dot.com $obj.i = 5; ?>

    This is easy because most editors have code templates, so it's easy to assign a shortcut to it. And whoever doesn't want the dot replacement doesn't have to use it. These are NOT ultimate solutions; they are ONLY examples to show that solutions exist, and that arguments against replacing '->' with '.' are only excuses. (Just admit you like the arrow, it's ok :) With this potential method, nobody who doesn't want to use it would have to use it, and it wouldn't break old code. And if other problems (ahem... excuses) arise, they could be fixed too. So who can, and who will, do such a thing?


  • SWF (using XML) only working locally, not on home server or web hosting server

    - by Andy
    Hi, I use a main SWF file which has some animations. It gets XML from a .php file that specifies several items, e.g. images and other SWFs to be used in the main SWF. Locally everything works perfectly, but when the SWF is served from my home server or my hosting provider it doesn't work any more, and I don't get why. All links are relative and correct. Somehow the main SWF doesn't load fully, or has problems with the XML from the .php file; I'm not sure. Now I only get a black box that doesn't show any of the other content it's supposed to. Check it out: http://deoshermes.ath.cx/cc-common/templates/dynamiclead/dynamic_leadee.swf

    The XML:

        <?xml version="1.0" ?>
        <dynamic_content>
            <item blurb="Text 1" content_url="" content_source="" content_timer="8000"
                  content_target="_self" tab_color="0x000000" tab_border_color="0x000000"
                  tab_arrow_color="0xFFFFFF" tab_text_color="0xFFFFFF"
                  tab_image="/template/images/dle_TOPmay06.jpg" cycle="true"
                  content_border_color="0x" content_bg_image="" tab_hl_color="0x000000"
                  tab_highlight_color="0x" tab_highlight_text_color="0x"
                  tab_highlight_image="">
            </item>
            <item blurb="Text 2" content_timer="5000" cycle="true"
                  content_border_color="0x" content_bg_image="" tab_hl_color="0xFFFFFF"
                  tab_border_color="0xFFFFFF" tab_color="0xFFFFFF"
                  tab_arrow_color="0xFFFFFF" tab_text_color="0xFFFFFF"
                  tab_image="/template/images/dle_MIDandBOTmay06.jpg"
                  tab_highlight_color="0x" tab_highlight_text_color="0x"
                  tab_highlight_image="" content_url="" content_source=""
                  content_target="_self">
            </item>
            <item blurb="Text 3" content_timer="5000" cycle="true"
                  content_border_color="0x" content_bg_image="" tab_hl_color="0xFFFFFF"
                  tab_border_color="0xFFFFFF" tab_color="0xFFFFFF"
                  tab_arrow_color="0xFFFFFF" tab_text_color="0xFFFFFF"
                  tab_image="/template/images/dle_MIDandBOTmay06.jpg"
                  tab_highlight_color="0x" tab_highlight_text_color="0x"
                  tab_highlight_image="" content_url="" content_source=""
                  content_target="_self">
            </item>
        </dynamic_content>

    This works like a charm when invoking the main SWF locally. The ActionScript from the main SWF can be found at [same domain as above]/Actionscript_mainmovie.txt and also seems to work fine. The function formattabs (line 68) uses some JavaScript. Locally the main SWF functions even without this hbx file, which is located at /cc-common/wss/hbx.js and actually used in the web page. I haven't got a clue what's keeping the main SWF from working properly, because all the other single SWFs work properly when invoked via a direct link; only this one isn't working. Do I maybe need to add something in the php.ini file? Any help would be appreciated!


  • Web design: PSD to HTML -> more direct ways?

    - by Assembler
    At work I see one colleague designing a site in Photoshop/Fireworks, and another taking this data, slicing it up and using Dreamweaver to rebuild the same thing from scratch. It seems like too much mucking around! I know that Photoshop can output table-based HTML, and Fireworks will create divs with absolute positioning; neither appears to be very helpful. Admittedly, I haven't tried much of (DW/FW) (CS4/CS3) since becoming a programmer, so I don't know if new versions are addressing this workflow issue, but are we still double-handling things? Can we attach some sort of layout metadata (this is a rollover button, this will be a SWF, this will be text, this logo will hide "xyz" <h1> text, etc.) to slices to aid in layout generation? Are there some secret tools which assist in this conversion process? Or are we still restricted to doing things by hand? The frustration continues when said hand-built page needs to be reworked again to fit Smarty templates/Wordpress/a generic CMS. I acknowledge that designers need to be free of systems to be able to do whatever, but most conventional sites have:

    - a header with navigation
    - a sidebar with more links
    - the main content part
    - maybe another sidebar
    - a footer

    Given the similarity of a lot of components, shouldn't there be a more systematic approach to going from sliced designs to functional HTML? Or am I over-simplifying things?

    -edit- Mmmmm.... I suppose I will accept an answer, but they weren't really what I was looking for. It just seems like designing the DOM is a bit of a holy grail ("It's only a model!"), and maybe with all the "groovy" things you can do with HTML and Javascript it would be mighty hard work, but with a set of constraints (that 960 stuff looks interesting), some well-designed reset style sheets and a bit of... fairy dust? we should be able to improve the workflow. Photoshop's tables by themselves are pretty much useless, I agree, but surely we can take this data, and then select a group of cells and say "right, this is a text div, overflow:auto" or "these cells are an image block, style it with the same height/width as the selected area". Admittedly, here at work there are other elephants in the room that need to make their formal introductions to management, but some parts of the design-to-page workflow seem... uneducated at best.


  • [jQuery] JS inside the template

    - by Martin Trigaux
    Hello, I'm trying to include some JavaScript code inside a template. The code of my HTML page:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
            <script type="text/javascript" src="jquery-1.4.2.min.js"></script>
            <script type="text/javascript" src="jquery-jtemplates.js"></script>
        </head>
        <body>
            <div id="infos"></div>
            <div id="my_template"></div>
            <script type="text/javascript">
                $(document).ready(function() {
                    $('#my_template').setTemplateURL("my_template.html");
                    try {
                        $('#my_template').processTemplate({'var': 1});
                    } catch (e) {
                        $('#infos').html('error : ' + e);
                    }
                    $('#my_button').click(function() {
                        alert('it works outside');
                    });
                });
            </script>
        </body>
        </html>

    and the template:

        content of the template<br/>
        {#if $T.var == 1}
        <script type="text/javascript">
            $(document).ready(function() {
                $('#my_button').click(function() {
                    alert('it works inside');
                });
            });
        </script>
        <input type='submit' id='my_button' value='click me' onclick='alert("direct");'/>
        {#else}
        not working
        {#/if}

    This produces an error inside the infos element:

        error : SyntaxError: missing } after function body

    If I just put alert('it works inside'); inside the script tag (removing all the jQuery-related code), the page loads and the two messages "direct" and "it works outside" are shown, but not "it works inside". It's supposed to work, as the documentation page says: "Allow to use JavaScript code in templates". Thank you
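
    A guess at the mechanism, plus a workaround sketch: jTemplates parses {...} blocks, so the literal braces in the inline function bodies are a plausible trigger for the "missing }" SyntaxError. One way to avoid putting scripts inside the template at all is a delegated handler bound from the host page - jQuery 1.4's .live() fires even for elements inserted later:

        $(document).ready(function() {
            // Delegated binding: works even though #my_button does not
            // exist until processTemplate() inserts it into the DOM.
            $('#my_button').live('click', function() {
                alert('it works inside');
            });
        });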


  • Dynamic Dispatch without Virtual Functions

    - by Kristopher Johnson
    I've got some legacy code that, instead of virtual functions, uses a kind field to do dynamic dispatch. It looks something like this:

        // Base struct shared by all subtypes
        // Plain-old data; can't use virtual functions
        struct POD {
            int kind;
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
        };

        enum Kind {
            Kind_Derived1,
            Kind_Derived2,
            Kind_Derived3
        };

        struct Derived1: POD {
            Derived1() { kind = Kind_Derived1; }
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
            // plus other type-specific data and function members
        };

        struct Derived2: POD {
            Derived2() { kind = Kind_Derived2; }
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
            // plus other type-specific data and function members
        };

        struct Derived3: POD {
            Derived3() { kind = Kind_Derived3; }
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
            // plus other type-specific data and function members
        };

    and then the POD class's function members are implemented like this:

        int POD::GetFoo() {
            // Call kind-specific function
            switch (kind) {
            case Kind_Derived1: {
                Derived1 *pDerived1 = static_cast<Derived1*>(this);
                return pDerived1->GetFoo();
            }
            case Kind_Derived2: {
                Derived2 *pDerived2 = static_cast<Derived2*>(this);
                return pDerived2->GetFoo();
            }
            case Kind_Derived3: {
                Derived3 *pDerived3 = static_cast<Derived3*>(this);
                return pDerived3->GetFoo();
            }
            default:
                throw UnknownKindException(kind, "GetFoo");
            }
        }

    POD::GetBar(), POD::GetBaz(), POD::GetXyzzy() and the other members are implemented similarly. This example is simplified: the actual code has about a dozen different subtypes of POD and a couple dozen methods. New subtypes of POD and new methods are added pretty frequently, so every time we do that, we have to update all these switch statements. The typical way to handle this would be to declare the function members virtual in the POD class, but we can't do that because the objects reside in shared memory. There is a lot of code that depends on these structs being plain old data, so even if I could figure out some way to have virtual functions in shared-memory objects, I wouldn't want to do that. So I'm looking for suggestions as to the best way to clean this up so that all the knowledge of how to call the subtype methods is centralized in one place, rather than scattered among a couple dozen switch statements in a couple dozen functions. What occurs to me is that I can create some sort of adapter class that wraps a POD and uses templates to minimize the redundancy. But before I start down that path, I'd like to know how others have dealt with this.
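
    One way to centralize that knowledge - a sketch, not the original code - is a hand-rolled dispatch table that lives outside the shared-memory structs, so they stay POD. Adding a method means one new column and adding a subtype means one new row, with no switch statements anywhere (names below are illustrative):

        // One row of plain function pointers per kind, kept in ordinary
        // process memory; the shared-memory structs are untouched.
        struct PodFunctions {
            int (*getFoo)(POD*);
            int (*getBar)(POD*);
        };

        // Generates the downcasting thunks once per subtype.
        template <class D>
        struct Thunks {
            static int GetFoo(POD* p) { return static_cast<D*>(p)->GetFoo(); }
            static int GetBar(POD* p) { return static_cast<D*>(p)->GetBar(); }
        };

        // Row order must match the Kind enum values.
        static const PodFunctions kDispatch[] = {
            { &Thunks<Derived1>::GetFoo, &Thunks<Derived1>::GetBar }, // Kind_Derived1
            { &Thunks<Derived2>::GetFoo, &Thunks<Derived2>::GetBar }, // Kind_Derived2
            { &Thunks<Derived3>::GetFoo, &Thunks<Derived3>::GetBar }, // Kind_Derived3
        };

        int POD::GetFoo() { return kDispatch[kind].getFoo(this); }
        int POD::GetBar() { return kDispatch[kind].getBar(this); }

    A range check on kind before indexing would preserve the UnknownKindException behaviour.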


  • Is there any loose-coupling mechanism in Objective-C + Cocoa like C# delegates or C++/Qt signals+slots?

    - by Eye of Hell
    Hello. For large programs, the standard way to tackle complexity is to divide the program code into small objects. Most current programming languages offer this functionality via classes, and so does Objective-C. But after the source code is separated into small objects, the second challenge is to somehow connect them with each other. Standard approaches supported by most languages are composition (one object is a member field of another), inheritance, templates (generics) and callbacks. More cryptic techniques include method-level delegates (C#) and signals+slots (C++/Qt). I like the delegates/signals idea, since when connecting two objects I can connect individual methods with each other, without the objects knowing anything of each other. For C#, it looks like this:

        var object1 = new CObject1();
        var object2 = new CObject2();
        object1.SomethingHappened += object2.HandleSomething;

    In this code, when object1 invokes its SomethingHappened delegate (like a normal method call), the HandleSomething method of object2 is called. For C++/Qt, it looks like this:

        CObject1* object1 = new CObject1();
        CObject2* object2 = new CObject2();
        connect(object1, SIGNAL(SomethingHappened()),
                object2, SLOT(HandleSomething()));

    The result is exactly the same. This technique has some advantages and disadvantages, but generally I like it more than interfaces, since as the code base grows I can change connections and add new ones without creating tons of interfaces. After examining Objective-C I haven't found any way to use the technique I like :(. It seems that Objective-C supports message passing perfectly well, but it requires object1 to have a pointer to object2 in order to pass it a message. If some object needs to be connected to lots of other objects, in Objective-C I would be forced to give it pointers to each of the objects it must be connected to. So, the question :). Is there any approach in Objective-C programming that closely resembles the delegate / signal+slot types of connection, rather than 'give the first object an entire pointer to the second object so it can pass a message to it'? Method-level connections are a bit more preferable to me than object-level connections ^_^.
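
    The closest built-in Cocoa idiom is target-action (method-level, much like a C# delegate): the emitter stores only an opaque id and a selector, and never sees the receiver's class or any interface. A minimal sketch with illustrative class names (NSNotificationCenter is the many-to-many alternative, where not even the id is needed):

        // Emitter knows neither the receiver's class nor any protocol.
        @interface Emitter : NSObject {
            id  target;
            SEL action;
        }
        - (void)connectTarget:(id)aTarget action:(SEL)anAction;
        - (void)somethingHappened;
        @end

        @implementation Emitter
        - (void)connectTarget:(id)aTarget action:(SEL)anAction {
            target = aTarget;
            action = anAction;
        }
        - (void)somethingHappened {
            [target performSelector:action];   // the "signal" fires the "slot"
        }
        @end

        // wiring, analogous to += / connect():
        // [object1 connectTarget:object2 action:@selector(handleSomething)];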


  • FILE_NOT_FOUND when trying to open COM port C++

    - by Moutabreath
    I am trying to open a COM port for reading and writing using C++, but I can't seem to get past the first stage of actually opening it: I get INVALID_HANDLE_VALUE for the handle, with GetLastError returning FILE_NOT_FOUND. I have searched around the web for a couple of days and I'm fresh out of ideas; I have also searched through all the questions regarding COM on this website. I have scanned through the existing ports (or so I believe) to get the name of the port right, and tried combinations of _T("COM1") with the slashes, without the slashes, with a colon, without a colon, and without the _T. I'm using Windows 7 on a 64-bit machine. This is the code I've got; I'll be glad for any input on it:

        void SendToCom(char* data, int len)
        {
            DWORD cbNeeded = 0;
            DWORD dwPorts = 0;
            EnumPorts(NULL, 1, NULL, 0, &cbNeeded, &dwPorts);

            BOOL bSuccess = FALSE; // what will be the return value
            LPCSTR COM1;
            BYTE* pPorts = static_cast<BYTE*>(malloc(cbNeeded));
            bSuccess = EnumPorts(NULL, 1, pPorts, cbNeeded, &cbNeeded, &dwPorts);
            if (bSuccess) {
                PORT_INFO_1* pPortInfo = reinterpret_cast<PORT_INFO_1*>(pPorts);
                for (DWORD i = 0; i < dwPorts; i++) {
                    // If it looks like "COMX" then
                    size_t nLen = _tcslen(pPortInfo->pName);
                    if (nLen > 3) {
                        if (_tcsnicmp(pPortInfo->pName, _T("COM"), 3) == 0) {
                            COM1 = pPortInfo->pName;
                            //COM1 = "\\\\.\\COM1";
                            HANDLE m_hCommPort = CreateFile(
                                COM1,
                                GENERIC_READ | GENERIC_WRITE, // access (read and write)
                                0,                    // (share) 0: cannot share the COM port
                                NULL,                 // security (None)
                                OPEN_EXISTING,        // creation: open_existing
                                FILE_FLAG_OVERLAPPED, // we want overlapped operation
                                NULL                  // no template file for COM port
                            );
                            if (m_hCommPort == INVALID_HANDLE_VALUE) {
                                DWORD err = GetLastError();
                                if (err == ERROR_FILE_NOT_FOUND) {
                                    MessageBox(hWnd, "ERROR_FILE_NOT_FOUND", NULL, MB_ABORTRETRYIGNORE);
                                } else if (err == ERROR_INVALID_NAME) {
                                    MessageBox(hWnd, "ERROR_INVALID_NAME", NULL, MB_ABORTRETRYIGNORE);
                                } else {
                                    MessageBox(hWnd, "unknown error", NULL, MB_ABORTRETRYIGNORE);
                                }
                            } else {
                                WriteAndReadPort(m_hCommPort, data);
                            }
                        }
                    }
                    pPortInfo++;
                }
            }
        }
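
    Two hedged suggestions, as a sketch rather than a verified diagnosis: EnumPorts enumerates spooler (printer) port names, which often carry a trailing colon ("COM1:") that CreateFile rejects, and ports above COM9 always need the \\.\ device-namespace prefix. Normalizing the enumerated name before CreateFile would look like this:

        // Strip any trailing ':' from the enumerated name and prepend the
        // \\.\ prefix before handing it to CreateFile.
        TCHAR name[32];
        _tcsncpy(name, pPortInfo->pName, 31);
        name[31] = _T('\0');
        size_t n = _tcslen(name);
        if (n > 0 && name[n - 1] == _T(':'))
            name[n - 1] = _T('\0');              // "COM1:" -> "COM1"

        TCHAR path[40];
        _stprintf(path, _T("\\\\.\\%s"), name);  // "COM1" -> \\.\COM1

        HANDLE h = CreateFile(path, GENERIC_READ | GENERIC_WRITE, 0, NULL,
                              OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);

    The registry key HKLM\HARDWARE\DEVICEMAP\SERIALCOMM lists the serial devices that actually exist, which is often a more reliable enumeration than EnumPorts.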


  • jquery ajax form success callback not being called

    - by Michael Merchant
    I'm trying to upload a file using "AJAX", process data in the file and then return some of that data to the UI so I can dynamically update the screen. I'm using the jQuery Form Plugin, jquery.form.js, found at http://jquery.malsup.com/form/ for the JavaScript, and Django on the back end. The form is being submitted and the processing on the back end goes through without a problem, but when a response is received from the server, my Firefox browser prompts me to download/open a file of type "application/json". The file has the JSON content that I've been trying to send to the browser. I don't believe this is an issue with how I'm sending the JSON, as I have a modularized json_wrapper() function that I'm using in multiple places in this same application. Here is what my form looks like after Django templates are applied:

        <form method="POST" enctype="multipart/form-data" action="/test_suites/active/upload_results/805/">
            <p>
                <label for="id_resultfile">Upload File:</label>
                <input type="file" id="id_resultfile" name="resultfile">
            </p>
        </form>

    You won't see any submit buttons because I'm calling submit with a button elsewhere, using ajaxSubmit() from the jquery.form.js plugin. Here is the controlling JavaScript code:

        function upload_results($dialog_box) {
            $form = $dialog_box.find("form");
            var options = {
                type: "POST",
                success: function(data) {
                    alert("Hello!!");
                },
                dataType: "json",
                error: function() {
                    console.log("errors");
                },
                beforeSubmit: function(formData, jqForm, options) {
                    console.log(formData, jqForm, options);
                }
            };
            $form.submit(function() {
                $(this).ajaxSubmit(options);
                return false;
            });
            $form.ajaxSubmit(options);
        }

    As you can see, I've gotten desperate enough to have the success callback simply raise an alert, but that call is never reached. The error function is not called either, while the beforeSubmit function is executed. The file that I get back has the following contents:

        {"count": 18, "failed": 0, "completed": 18, "success": true, "trasaction_id": "SQEID0.231"}

    I use 'success' here to denote whether or not the server was able to run the post command adequately. If it failed, the result would look something like:

        {"success": false, "message": "<error_message>"}

    Your time and help is greatly appreciated. I've spent a few days on this now and would love to move on.
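
    A plausible explanation, per the plugin's own documentation: forms containing a file input can't be posted via XMLHttpRequest, so jquery.form.js falls back to posting into a hidden iframe - and when the iframe response arrives with Content-Type: application/json, Firefox treats it as a download instead of handing it to the page, so no callback ever fires. The documented workaround is to serve the JSON body as text/html for these upload responses. A sketch on the Django side (view name and payload are illustrative):

        import json
        from django.http import HttpResponse

        def upload_results(request, suite_id):
            payload = {"count": 18, "failed": 0, "completed": 18,
                       "success": True, "trasaction_id": "SQEID0.231"}
            # text/html, not application/json: the hidden-iframe transport
            # used for file uploads can then read the body.
            return HttpResponse(json.dumps(payload), mimetype="text/html")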


  • Closing a process that was opened in code

    - by AmirHossein
    I create a Word template with some placeholders for fields; in code, I insert values into these placeholders and show the document to the user:

        protected void Button1_Click(object sender, EventArgs e)
        {
            string DocFilePath = "";
            //string FilePath = System.Windows.Forms.Application.StartupPath;
            object fileName = @"[...]\asset\word templates\FormatPeygiri1.dot";
            DocFilePath = fileName.ToString();
            FileInfo fi = new FileInfo(DocFilePath);
            if (fi.Exists)
            {
                object readOnly = false;
                object isVisible = true;
                object PaperNO = "PaperNO";
                object PaperDate = "PaperDate";
                object Peyvast = "Peyvast";
                object To = "To";
                object ShoName = "ShoName";
                object DateName = "DateName";

                Microsoft.Office.Interop.Word.Document aDoc = WordApp.Documents.Open(
                    ref fileName, ref missing, ref readOnly, ref missing, ref missing,
                    ref missing, ref missing, ref missing, ref missing, ref missing,
                    ref missing, ref isVisible, ref isVisible, ref missing, ref missing,
                    ref missing);

                WordApp.ActiveDocument.FormFields.get_Item(ref PaperNO).Result = TextBox_PaperNO.Text;
                string strPaperDate = string.Format("{0}/{1}/{2}",
                    PersianDateTimeHelper.GetPersainDay(DateTimePicker_PaperDate.SelectedDate),
                    PersianDateTimeHelper.GetPersainMonth(DateTimePicker_PaperDate.SelectedDate),
                    PersianDateTimeHelper.GetPersainYear(DateTimePicker_PaperDate.SelectedDate));
                WordApp.ActiveDocument.FormFields.get_Item(ref PaperDate).Result = strPaperDate;
                WordApp.ActiveDocument.FormFields.get_Item(ref Peyvast).Result = TextBox_Peyvast.Text;
                WordApp.ActiveDocument.FormFields.get_Item(ref To).Result = TextBox_To.Text;
                WordApp.ActiveDocument.FormFields.get_Item(ref ShoName).Result = TextBox_ShoName.Text;
                string strDateName = string.Format("{0}/{1}/{2}",
                    PersianDateTimeHelper.GetPersainDay(DateTimePicker_DateName.SelectedDate),
                    PersianDateTimeHelper.GetPersainMonth(DateTimePicker_DateName.SelectedDate),
                    PersianDateTimeHelper.GetPersainYear(DateTimePicker_DateName.SelectedDate));
                WordApp.ActiveDocument.FormFields.get_Item(ref DateName).Result = strDateName;

                aDoc.Activate();
                WordApp.Visible = true;
                aDoc = null;
                WordApp = null;
            }
            else
            {
                MessageBox1.Show("File Not Exist!");
            }
        }

    It works fine! But when a user closes Word, its process is not closed and remains in the Task Manager process list under the name WINWORD.EXE. I know that I can kill a process from code with process.Kill(), but I don't know which process I should kill: if I kill all processes named WINWORD.EXE, all Word windows close, whereas I want to close only the specific Word window whose process I opened. How do I do it?
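
    A hedged note: the leftover WINWORD.EXE usually isn't a process that needs killing at all - the interop objects this handler still references keep Word alive after the user closes the window. Releasing the COM wrappers (instead of just nulling the variables) normally lets that one instance exit on its own. A sketch using the same variable names as above:

        // Instead of aDoc = null; WordApp = null; release the runtime
        // callable wrappers so the Word process can exit when the user
        // closes the window.
        System.Runtime.InteropServices.Marshal.ReleaseComObject(aDoc);
        System.Runtime.InteropServices.Marshal.ReleaseComObject(WordApp);
        GC.Collect();
        GC.WaitForPendingFinalizers();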


  • Database design advice needed.

    - by user346271
    Hi all, I'm a lone developer for a telecoms company, and am after some database design advice from anyone with a bit of time to answer.

    I am inserting ~2 million rows each day into one table; these tables then get archived and compressed on a monthly basis. Each monthly table contains ~15,000,000 rows, and this is increasing month on month. For every insert I do above, I combine the data from rows which belong together and create another "correlated" table. This table is currently not being archived, as I need to make sure I never miss an update to it (hope that makes sense), although in general this information should remain fairly static after a couple of days of processing.

    All of the above works perfectly. However, my company now wishes to perform some stats against this data, and these tables are getting too large to provide the results in what would be deemed a reasonable time, even with the appropriate indexes set. So I guess after all the above my question is quite simple: should I write a script which groups the data from my correlated table into smaller tables, or should I store the query result sets in something like memcache? I'm already using MySQL's query cache, but due to having limited control over how long the data is stored for, it's not working ideally.

    The main advantages I can see of using something like memcache:
    - No blocking on my correlated table after the query has been cached.
    - Greater flexibility in sharing the collected data between the backend collector and the front-end processor (i.e. custom reports could be written in the backend and the results stored in the cache under a key which then gets shared with anyone who would want to see the data of this report).
    - Redundancy and scalability if we start sharing this data with a large number of customers.

    The main disadvantages of using something like memcache:
    - Data is not persistent if the machine is rebooted / the cache is flushed.

    The main advantages of using MySQL:
    - Persistent data.
    - Fewer code changes (although adding something like memcache is trivial anyway).

    The main disadvantages of using MySQL:
    - Have to define table templates every time I want to store a new set of grouped data.
    - Have to write a program which loops through the correlated data and fills these new tables.
    - Will potentially still grow slower as the data continues to be filled.

    Apologies for quite a long question. It's helped me to write down these thoughts here anyway, and any advice/help/experience with dealing with this sort of problem would be greatly appreciated. Many thanks. Alan


  • PHP create page as a string after PHP runs

    - by John
    I'm stuck on how to write the test.php page result (after the PHP has run) to a string.

    testFunctions.php:

        <?php
        function htmlify($html, $format) {
            if ($format == "print") {
                $html = str_replace("<", "&lt;", $html);
                $html = str_replace(">", "&gt;", $html);
                $html = str_replace("&nbsp;", "&amp;nbsp;", $html);
                $html = nl2br($html);
                return $html;
            }
        }

        $input = <<<HTML
        <div style="background-color:#959595; width:400px;">
        &nbsp;<br>
        input <b>text</b>
        <br>&nbsp;
        </div>
        HTML;

        function content($input, $mode) {
            if ($mode == "display") {
                return $input;
            } else if ($mode == "source") {
                return htmlify($input, "print");
            }
        }

        function pagePrint($page) {
            $a = array(
                'file_get_contents' => array($page),
                'htmlify' => array($page, "print")
            );
            foreach ($a as $func => $args) {
                $x = call_user_func_array($func, $args);
                $page .= $x;
            }
            return $page;
        }

        $file = "test.php";
        ?>

    test.php:

        <?php include "testFunctions.php"; ?>
        <br><hr>here is the rendered html:<hr>
        <?php
        $a = content($input, "display");
        echo $a;
        ?>
        <br><hr>here is the source code:<hr>
        <?php
        $a = content($input, "source");
        echo $a;
        ?>
        <br><hr>here is the source code of the entire page after the php has been executed:<hr>
        <div style="margin-left:40px; background-color:#ebebeb;">
        <?php
        $a = pagePrint($file);
        echo $a;
        ?>
        </div>

    I'd like to keep all the PHP in the testFunctions.php file, so I can place simple function calls into templates for HTML emails. Thanks!
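
    A sketch of the standard way to capture a page's output after PHP has run - output buffering around an include (the helper name is illustrative):

        <?php
        // Run test.php and return everything it prints as a string.
        function renderToString($path) {
            ob_start();             // start capturing output
            include $path;          // the PHP executes; output goes to the buffer
            return ob_get_clean();  // stop capturing and return what was printed
        }

        $rendered = renderToString("test.php");
        echo htmlify($rendered, "print"); // reuse the existing helper to show the source
        ?>

    If test.php itself includes testFunctions.php, guard against function redeclaration by using include_once in both places.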


  • Replace text in XSL using wildcards

    - by JosephThomas
    This is similar to an earlier problem I was having, which you guys solved in less than a day. I am working with XML files that are generated by a digital video camera. The camera allows the user to save all of the camera's settings to an SD card so that the settings can be recalled or loaded into another camera. The XSL stylesheet I am writing will allow users to view the camera's settings, as saved to the SD card, in a web browser. While most of the values in the XML file - as formatted by my stylesheet - make sense to humans, some do not. What I would like to do is have the stylesheet display text that is based on the value in the XML file but more easily understood by humans. A typical value that can be written to the XML file is "_23_970", which represents the camera's frame rate. This would be better displayed as 23.970 (or 023.970). The first underscore is a sort of placeholder to make space for values over 099.999; the second underscore obviously represents the decimal point. My previous (similar) question involved replacing predictable text, and the solution was matching templates. In this case, however, the camera can be set to any one of 119,999 frame rates (I think I did that math correctly). The approach, I would guess, is to pass a value to the displayed web page that keeps the numeric digits, replaces the second underscore with a decimal point, and replaces the first underscore with either an nbsp or a zero (whichever is easier). If the first character in the string is a "1" (the camera can run at frame rates up to 120.000), then the one should be passed on to the page displayed by the stylesheet. I have read other posts here regarding wildcards but couldn't find one that answered this question.

    EDIT: Sorry for leaving out important info; I fared better on my first try at asking a question - I guess I got complacent. Anyhow, here is the code that displays the text in the XSL file as-is:

        <xsl:for-each select="Settings/Groups/Recording">
            <tr>
                <td class="title_column">Frame Rate</td>
                <td><xsl:value-of select="RecOutLinkSpeed"/></td>
            </tr>
        </xsl:for-each>

    I should also have given you the URL for the sample file I have been working with: http://josephthomas.info/Alexa/Setup_120511_140322.xml
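
    Since the underscores are pure padding and the scale is fixed (thousandths), one sketch avoids wildcard matching entirely: strip the underscores with translate(), then let format-number() re-insert the decimal point and any leading zero. This assumes every value has the padded form described above:

        <!-- "_23_970" -> 23970 -> "023.970"; "120_000" -> 120000 -> "120.000" -->
        <td>
            <xsl:value-of select="format-number(number(translate(RecOutLinkSpeed, '_', '')) div 1000, '000.000')"/>
        </td>

    Using the picture '##0.000' instead of '000.000' drops the leading zero, if blank padding is preferred over "023.970".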


  • Template class + virtual function = must implement?

    - by sold
    This code:

        template <typename T>
        struct A {
            T t;
            void DoSomething() {
                t.SomeFunction();
            }
        };

        struct B {
        };

        A<B> a;

    is easily compiled without any complaints, as long as I never call a.DoSomething(). However, if I define DoSomething as a virtual function, I will get a compile error saying that B doesn't declare SomeFunction. I can somewhat see why it happens (DoSomething should now have an entry in the vtable), but I can't help feeling that it's not really necessary. Plus, it sucks. Is there any way to overcome this?

    EDIT 2: Okay, I hope this time it makes sense. Let's say I am doing intrusive ref counting, so all entities must inherit from a base class Object. How can I support primitive types too? I can define:

        template <typename T>
        class Primitive : public Object {
            T value;
        public:
            Primitive(const T &value = T());
            operator T() const;
            Primitive<T> &operator =(const T &value);
            Primitive<T> &operator +=(const T &value);
            Primitive<T> &operator %=(const T &value);
            // And so on...
        };

    so I can use Primitive<int>, Primitive<char>... but what about Primitive<float>? It seems like a problem, because floats don't have a %= operator. But actually it isn't, since I'll never call operator %= on a Primitive<float>. That's one of the deliberate features of templates. If, for some reason, I were to define operator %= as virtual - or if I pre-export Primitive<float> from a DLL to avoid link errors - the compiler would complain even though I never call operator %= on a Primitive<float>. If it would just fill in a dummy value for operator %= in Primitive<float>'s vtable (one that raises an exception?), everything would be fine.
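
    One sketch of exactly that "dummy vtable slot" idea: an explicit specialization of just the offending member gives Primitive<float> a compilable body without touching the primary template (assuming the class definition above):

        #include <stdexcept>

        // Specialize only operator%= for float; Primitive<float>'s vtable
        // slot now has a body, and the error moves from compile time to
        // run time.
        template <>
        Primitive<float>& Primitive<float>::operator%=(const float &) {
            throw std::logic_error("operator%= is not supported for Primitive<float>");
        }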


  • C++ stream as a parameter when overloading operator<<

    - by TheOm3ga
    I'm trying to write my own logging class and use it as a stream:

        logger L;
        L << "whatever" << std::endl;

    This is the code I started with:

        #include <iostream>
        using namespace std;

        class logger {
        public:
            template <typename T>
            friend logger& operator <<(logger& log, const T& value);
        };

        template <typename T>
        logger& operator <<(logger& log, T const & value) {
            // Here I'd output the values to a file and stdout, etc.
            cout << value;
            return log;
        }

        int main(int argc, char *argv[]) {
            logger L;
            L << "hello" << '\n';        // This works
            L << "bye" << "alo" << endl; // This doesn't work
            return 0;
        }

    But I was getting an error when trying to compile, saying that there was no matching definition for operator<<:

        pruebaLog.cpp:31: error: no match for 'operator<<' in 'operator<< [with T = char [4]](((logger&)((logger*)operator<< [with T = char [4]](((logger&)(& L)), ((const char (&)[4])"bye")))), ((const char (&)[4])"alo")) << std::endl'

    So I've been trying to overload operator<< to accept this kind of stream, but it's driving me mad; I don't know how to do it. I've been looking, for instance, at the definition of std::endl in the ostream header file, and wrote a function with this header:

        logger& operator <<(logger& log, const basic_ostream<char, char_traits<char> >& (*s)(basic_ostream<char, char_traits<char> >&))

    But no luck. I've tried the same using templates instead of directly using char, and also tried simply using "const ostream& os" - nothing. Another thing that bugs me is that, in the error output, the first argument of operator<< changes: sometimes it's a reference to a pointer, sometimes it looks like a double reference...
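
    The reason std::endl doesn't match the generic overload: endl is itself a function template, so template argument deduction has nothing to pin its type down. The usual fix is a non-template overload taking an ostream-manipulator function pointer (note: no const on the pointer type, which is likely why the attempt above failed). A sketch against the logger above:

        // Matches manipulators such as std::endl and std::flush and
        // applies them to the underlying stream(s).
        logger& operator<<(logger& log, std::ostream& (*manip)(std::ostream&)) {
            manip(std::cout);
            return log;
        }

    With this in place, L << "bye" << "alo" << std::endl resolves: the string insertions use the template and the final insertion picks the manipulator overload.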


  • How to give properties to C++ classes (interfaces)

    - by caas
    Hello, I have built several classes (A, B, C...) which perform operations on the same BaseClass. Example:

        struct BaseClass {
            int method1();
            int method2();
            int method3();
        };

        struct A {
            int methodA(BaseClass& bc) { return bc.method1(); }
        };

        struct B {
            int methodB(BaseClass& bc) { return bc.method2() + bc.method1(); }
        };

        struct C {
            int methodC(BaseClass& bc) { return bc.method3() + bc.method2(); }
        };

    But as you can see, each class A, B, C... only uses a subset of the available methods of BaseClass, and I'd like to split BaseClass into several chunks such that it is clear what is used and what is not. For example, a solution could be to use multiple inheritance:

        // A uses only method1()
        struct InterfaceA {
            virtual int method1() = 0;
        };
        struct A {
            int methodA(InterfaceA&);
        };

        // B uses method1() and method2()
        struct InterfaceB {
            virtual int method1() = 0;
            virtual int method2() = 0;
        };
        struct B {
            int methodB(InterfaceB&);
        };

        // C uses method2() and method3()
        struct InterfaceC {
            virtual int method2() = 0;
            virtual int method3() = 0;
        };
        struct C {
            int methodC(InterfaceC&);
        };

    The problem is that each time I add a new type of operation, I need to change the implementation of BaseClass. For example:

        // D uses method1() and method3()
        struct InterfaceD {
            virtual int method1() = 0;
            virtual int method3() = 0;
        };
        struct D {
            int methodD(InterfaceD&);
        };

        struct BaseClass : public A, B, C // here I need to add class D
        {
            ...
        };

    Do you know a clean way I can do this? Thanks for your help.

    Edit: I forgot to mention that it can also be done with templates, but I don't like that solution either, because the required interface does not appear explicitly in the code: you have to try to compile the code to verify that all required methods are implemented correctly. Plus, it would require instantiating different versions of the classes (one for each BaseClass type template parameter), and this is not always possible nor desired.
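
    One sketch (illustrative names, not a drop-in fix): define one tiny interface per capability and let BaseClass implement each capability exactly once; consumers then request the capabilities they need as separate parameters, so adding a new consumer such as D touches nothing in BaseClass:

        struct HasMethod1 { virtual int method1() = 0; };
        struct HasMethod2 { virtual int method2() = 0; };
        struct HasMethod3 { virtual int method3() = 0; };

        // BaseClass composes the full set of capabilities once.
        struct BaseClass : HasMethod1, HasMethod2, HasMethod3 {
            int method1() { return 1; }
            int method2() { return 2; }
            int method3() { return 3; }
        };

        // A new consumer just names the capabilities it uses, explicitly.
        struct D {
            int methodD(HasMethod1& m1, HasMethod3& m3) {
                return m1.method1() + m3.method3();
            }
        };

        // call site passes the same object for each capability:
        //   BaseClass bc; D d; d.methodD(bc, bc);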


  • C++ template function compiles in header but not in implementation file

    - by flies
    I'm trying to learn templates, and I've run into this confounding error. I'm declaring some functions in a header file, and I want to make a separate implementation file where the functions will be defined. Here's the code that uses the header (dum.cpp):

        #include <iostream>
        #include <vector>
        #include <string>
        #include "dumper2.h"

        int main() {
            std::vector<int> v;
            for (int i = 0; i < 10; i++) {
                v.push_back(i);
            }
            test();
            std::string s = ", ";
            dumpVector(v, s);
        }

    Now, here's a working header file (dumper2.h):

        #include <iostream>
        #include <string>
        #include <vector>

        void test();

        template <class T>
        void dumpVector(std::vector<T> v, std::string sep);

        template <class T>
        void dumpVector(std::vector<T> v, std::string sep) {
            typename std::vector<T>::iterator vi;
            vi = v.begin();
            std::cout << *vi;
            vi++;
            for (; vi < v.end(); vi++) {
                std::cout << sep << *vi;
            }
            std::cout << "\n";
            return;
        }

    with this implementation (dumper2.cpp):

        #include <iostream>
        #include "dumper2.h"

        void test() {
            std::cout << "!olleh dlrow\n";
        }

    The weird thing is that if I move the code that defines dumpVector from the .h to the .cpp file, I get the following error:

        g++ -c dumper2.cpp -Wall -Wno-deprecated
        g++ dum.cpp -o dum dumper2.o -Wall -Wno-deprecated
        /tmp/ccKD2e3G.o: In function `main':
        dum.cpp:(.text+0xce): undefined reference to `void dumpVector<int>(std::vector<int, std::allocator<int> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
        collect2: ld returned 1 exit status
        make: *** [dum] Error 1

    So why does it work one way and not the other? Clearly the compiler can find test(), so why can't it find dumpVector?
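
    The behaviour is expected: a template is only a recipe, and when its definition lives in dumper2.cpp the compiler never sees a request for dumpVector<int> in that translation unit, so no code is emitted for the linker to find (test() is an ordinary function, which is why it links fine). Besides keeping the definition in the header, one sketch of the alternative is an explicit instantiation in the .cpp file:

        // at the end of dumper2.cpp, after the template's definition:
        // forces dumpVector<int> to be compiled into dumper2.o so the
        // call in dum.cpp can link. One such line per type you need.
        template void dumpVector<int>(std::vector<int> v, std::string sep);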


  • Slow login to load-balanced Terminal Server 2008 behind Gateway Server

    - by Frans
    I have a small load-balanced (using Session Broker) Terminal Server 2008 farm behind a Gateway Server which is accessed from the Internet. The problem I have is that there is a delay of 20-30 seconds if the session broker switches the user to another server during login. I think this is related to the fact that I am forcing the security layer to be RDP rather than SSL.

    The background: the Gateway server has a public routable IP address and DNS name so it can be accessed from the Internet, and all users come in via this route (the system provides external customers with access to hosted applications). The actual terminal servers only have internal IP addresses. This works really well, except that with a Vista or Windows 7 client, the Remote Desktop client will negotiate with the server to use SSL for the security layer. This then exposes the auto-generated certificate that TS1 or TS2 has - but since these are internal, auto-generated certificates, the client gets a stern warning that the certificate is not valid. I can't give the servers a properly authorised certificate, as they have no public routable IP address or DNS name. Instead, I am using Group Policy to force the connections to be over RDP instead of SSL:

        Computer Configuration\Policies\Administrative Templates\Windows Components\Terminal Services\Terminal Server\Security\Require use of specific security layer for remote (RDP) connections

    The Windows 7 user now gets a much less stern warning that "the server's identity cannot be confirmed", which I can live with. I don't have enough control over the end users' machines to ask them to install a new root certificate either.

    TS1 and TS2 are load-balanced using the Session Broker, which is installed on the Gateway Server. I am using round-robin DNS, so the user's initial connection goes via Gateway1 to either TS1 or TS2. TS1/TS2 then talks to the session broker and may pass the user to the other server; i.e. the user may get connected to TS2, but after talking to the session broker the user may be passed to TS1, which is where they will run their session. When this switching of servers happens, in my setup, the screen sits with the word "Welcome" for 20-30 seconds, after which it flickers, "Welcome" is shown again, and then it flashes through the normal login screens (i.e. "wait for user profile manager" etc.). Having done some research, I think what is happening is that the user is being fully logged on to TS2 (while "Welcome" is shown) before being passed to TS1, where they are then logged in again. It is interesting that normally when you see the word "Welcome", the little circle to the left rotates; it does not rotate during this delay - the screen just looks frozen. This blog post leads me to think that this is because CredSSP is not being used, probably because I am disallowing SSL and forcing RDP.

    What I have tried:
    - I enabled SSL again, which removes the "Welcome" delay. However, it seems to introduce a new delay much earlier in the process: the RDP client's "initialising connection" stage is now much slower. Quite apart from that, my certificate problem precludes using that solution without considerable difficulty.
    - I tried disabling the load balancing (just removing the servers from the session broker farm), and the connections have no delay. The problem is also intermittent in the sense that it only happens when the user gets bumped from one server to another. I tested this by trying to connect directly to TS1 (via the Gateway, of course) and then checking which server I actually got connected to. Just to be sure, I also bypassed the round-robin DNS to see if it had any impact, and it doesn't.
    - I tried changing to a dedicated redirector. Basically, rather than using round-robin DNS, I pointed my DNS at the Gateway server and configured it as a dedicated redirector (disallow logons, add it to the farm). Same problem, alas.

    The setup is essentially in line with MS recommendations here: TS Session Broker Load Balancing Step-by-Step Guide. Any ideas or suggestions gratefully received.


  • Sign an OpenSSL .CSR with Microsoft Certificate Authority

    - by kce
    I'm in the process of building a Debian FreeRADIUS server that does 802.1X authentication for domain members. I would like to sign my RADIUS server's SSL certificate (used for EAP-TLS) and leverage the domain's existing PKI. The RADIUS server is joined to the domain via Samba and has a machine account, as displayed in Active Directory Users and Computers. The domain controller I'm trying to sign my RADIUS server's key against does not have IIS installed, so I can't use the preferred Certsrv web page to generate the certificate. The MMC tools won't work, as they can't access the certificate stores on the RADIUS server, because those don't exist. This leaves the certreq.exe utility. I'm generating my .CSR with the following command:

        openssl req -nodes -newkey rsa:1024 -keyout server.key -out server.csr

    The resulting .CSR:

        ******@mis-ke-lnx:~/G$ openssl req -text -noout -in mis-radius-lnx.csr
        Certificate Request:
            Data:
                Version: 0 (0x0)
                Subject: C=US, ST=Alaska, L=CITY, O=ORG, OU=DEPT, CN=ME/emailAddress=MYEMAIL
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                    RSA Public Key: (1024 bit)
                        Modulus (1024 bit):
                            00:a8:b3:0d:4b:3f:fa:a4:5f:78:0c:24:24:23:ac:
                            cf:c5:28:af:af:a2:9b:07:23:67:4c:77:b5:e8:8a:
                            08:2e:c5:a3:37:e1:05:53:41:f3:4b:e1:56:44:d2:
                            27:c6:90:df:ae:3b:79:e4:20:c2:e4:d1:3e:22:df:
                            03:60:08:b7:f0:6b:39:4d:b4:5e:15:f7:1d:90:e8:
                            46:10:28:38:6a:62:c2:39:80:5a:92:73:37:85:37:
                            d3:3e:57:55:b8:93:a3:43:ac:2b:de:0f:f8:ab:44:
                            13:8e:48:29:d7:8d:ce:e2:1d:2a:b7:2b:9d:88:ea:
                            79:64:3f:9a:7b:90:13:87:63
                        Exponent: 65537 (0x10001)
                Attributes:
                    a0:00
            Signature Algorithm: sha1WithRSAEncryption
                35:57:3a:ec:82:fc:0a:8b:90:9a:11:6b:56:e7:a8:e4:91:df:
                73:1a:59:d6:5f:90:07:83:46:aa:55:54:1c:f9:28:3e:a6:42:
                48:0d:6b:da:58:e4:f5:7f:81:ee:e2:66:71:78:85:bd:7f:6d:
                02:b6:9c:32:ad:fa:1f:53:0a:b4:38:25:65:c2:e4:37:00:16:
                53:d2:da:f2:ad:cb:92:2b:58:15:f4:ea:02:1c:a3:1c:1f:59:
                4b:0f:6c:53:70:ef:47:60:b6:87:c7:2c:39:85:d8:54:84:a1:
                b4:67:f0:d3:32:f4:8e:b3:76:04:a8:65:48:58:ad:3a:d2:c9:
                3d:63

    I'm trying to submit my certificate using the following certreq.exe command:

        certreq -submit -attrib "CertificateTemplate:Machine" server.csr

    I receive the following error upon doing so:

        RequestId: 601
        Certificate not issued (Denied) Denied by Policy Module
        The DNS name is unavailable and cannot be added to the Subject Alternate name. 0x8009480f (-2146875377)
        Certificate Request Processor: The DNS name is unavailable and cannot be added to the Subject Alternate name. 0x8009480f (-2146875377)
        Denied by Policy Module

    My certificate authority has the following certificate templates available. If I try to submit with certreq.exe using "CertificateTemplate:Computer" instead of "CertificateTemplate:Machine", I get an error reporting that "the requested certificate template is not supported by this CA". My google-fu has failed me so far in trying to understand this error... I feel like this should be a relatively simple task, as X.509 is X.509 and OpenSSL generates the .CSRs in the required PKCS#10 format. I can't be the only one out there trying to sign an OpenSSL-generated key on a Linux box with a Windows Certificate Authority, so how do I do this (preferably using the offline certreq.exe tool)?
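
    A hedged pointer, since CA policy details vary: the Machine/Computer template builds the subject and SAN from the requesting machine's AD object, and a request submitted on behalf of a non-Windows host carries no DNS name the policy module can map - hence the 0x8009480f denial. One common workaround is to pass the SAN as a request attribute, which requires the CA to have attribute-supplied SANs enabled (weigh the security implications of leaving that flag on):

        :: on the CA, once:
        certutil -setreg policy\EditFlags +EDITF_ATTRIBUTESUBJECTALTNAME2
        net stop certsvc && net start certsvc

        :: then submit with the SAN attribute (FQDN is illustrative):
        certreq -submit -attrib "CertificateTemplate:Machine\nSAN:dns=radius.example.com" server.csr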


  • 502 Bad Gateway - nginx

    - by ADH2
    I am randomly receiving 502 Bad Gateway error pages. I can reproduce the issue by modifying hosting plans in Plesk 11 while refreshing a page for a minute or two. When I get the 502 error page, all I have to do is refresh the browser and the page loads properly. I am using CentOS 6. This is from today's log (/var/log/nginx/error.log):

        2012/12/04 10:52:07 [error] 21272#0: *545 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 82.77.68.111, server: likeit-craiova.ro, request: "GET / HTTP/1.1", upstream: "http://195.254.135.113:7080/", host: "likeit-craiova.ro"

    This is the nginx config (/etc/nginx/nginx.conf):

        #user nginx;
        worker_processes 1;

        #error_log /var/log/nginx/error.log;
        #error_log /var/log/nginx/error.log notice;
        #error_log /var/log/nginx/error.log info;
        #pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;

            proxy_buffer_size 128k;
            proxy_buffers 4 256k;
            proxy_busy_buffers_size 256k;

            #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
            #                '$status $body_bytes_sent "$http_referer" '
            #                '"$http_user_agent" "$http_x_forwarded_for"';
            #access_log /var/log/nginx/access.log main;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;
            #tcp_nodelay on;

            #gzip on;
            #gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            server_tokens off;

            include /etc/nginx/conf.d/*.conf;
        }

    The fastcgi config file (/etc/nginx/fastcgi.conf):

        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param HTTPS $https if_not_empty;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    The fastcgi parameters config (/etc/nginx/fastcgi_params):

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param HTTPS $https if_not_empty;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    Also, I'm getting this on a shared hosting server, for one of the domains:

        Unable to generate the web server configuration file on the host because of the following errors:
        nginx: [warn] duplicate MIME type "text/html" in /etc/nginx/nginx.conf:45
        nginx: [emerg] open() "/var/www/vhosts/partydayandnight.ro/statistics/logs/proxy_access_log" failed (24: Too many open files)
        nginx: configuration file /etc/nginx/nginx.conf test failed
        Please resolve the errors in web server configuration templates and generate the file again.

    Why is this appearing, and what trouble may it cause? What can I do to get these errors fixed? Thank you!

    Read the article

  • Fedora 17 keeps using the Fedora 16 kernel

    - by MTilsted
    I ran preupgrade to upgrade my Fedora 16 (x64) to Fedora 17, and it seemed to work fine: I got the new GIMP 2.8, GCC 4.7.0 and so on. But the system keeps using the old kernel from fc16. uname -a gives me:

        Linux localhost.localdomain 3.3.6-3.fc16.x86_64 #1 SMP Wed May 16 21:43:01 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    The system did download the new kernel, so I have:

        /boot/vmlinuz-3.3.7-1.fc17.x86_64
        /boot/System.map-3.3.7-1.fc17.x86_64
        /boot/initramfs-3.3.7-1.fc17.x86_64.img
        /boot/config-3.3.7-1.fc17.x86_64

    But the system keeps using the old kernel from fc16. My /boot/grub2/grub.cfg file looks like this:

        #
        # DO NOT EDIT THIS FILE
        #
        # It is automatically generated by grub2-mkconfig using templates
        # from /etc/grub.d and settings from /etc/default/grub
        #

        ### BEGIN /etc/grub.d/00_header ###
        if [ -s $prefix/grubenv ]; then
          load_env
        fi
        set default="0"
        if [ "${prev_saved_entry}" ]; then
          set saved_entry="${prev_saved_entry}"
          save_env saved_entry
          set prev_saved_entry=
          save_env prev_saved_entry
          set boot_once=true
        fi

        function savedefault {
          if [ -z "${boot_once}" ]; then
            saved_entry="${chosen}"
            save_env saved_entry
          fi
        }

        function load_video {
          insmod vbe
          insmod vga
          insmod video_bochs
          insmod video_cirrus
        }

        set timeout=5
        ### END /etc/grub.d/00_header ###

        ### BEGIN /etc/grub.d/10_linux ###
        menuentry 'Fedora (3.3.6-3.fc16.x86_64)' --class fedora --class gnu-linux --class gnu --class os {
          load_video
          set gfxpayload=keep
          insmod gzio
          insmod part_gpt
          insmod ext2
          set root='(hd0,gpt2)'
          search --no-floppy --fs-uuid --set=root 3521a578-5829-4fb4-a485-8c097df77d07
          echo 'Loading Fedora (3.3.6-3.fc16.x86_64)'
          linux /vmlinuz-3.3.6-3.fc16.x86_64 root=UUID=57459a16-97a0-46a4-8e71-cc3ec0ca4a3e ro KEYTABLE=dvorak rd.lvm=0 rd.dm=0 quiet SYSFONT=latarcyrheb-sun16 rhgb rd.md.uuid=60956781:734d95ba:424311e2:796702a7 rd.luks=0 LANG=en_US.UTF-8
          echo 'Loading initial ramdisk ...'
          initrd /initramfs-3.3.6-3.fc16.x86_64.img
        }
        menuentry 'Fedora (3.3.5-2.fc16.x86_64)' --class fedora --class gnu-linux --class gnu --class os {
          load_video
          set gfxpayload=keep
          insmod gzio
          insmod part_gpt
          insmod ext2
          set root='(hd0,gpt2)'
          search --no-floppy --fs-uuid --set=root 3521a578-5829-4fb4-a485-8c097df77d07
          echo 'Loading Fedora (3.3.5-2.fc16.x86_64)'
          linux /vmlinuz-3.3.5-2.fc16.x86_64 root=UUID=57459a16-97a0-46a4-8e71-cc3ec0ca4a3e ro KEYTABLE=dvorak rd.lvm=0 rd.dm=0 quiet SYSFONT=latarcyrheb-sun16 rhgb rd.md.uuid=60956781:734d95ba:424311e2:796702a7 rd.luks=0 LANG=en_US.UTF-8
          echo 'Loading initial ramdisk ...'
          initrd /initramfs-3.3.5-2.fc16.x86_64.img
        }
        ### END /etc/grub.d/10_linux ###

        ### BEGIN /etc/grub.d/20_linux_xen ###
        ### END /etc/grub.d/20_linux_xen ###

        ### BEGIN /etc/grub.d/30_os-prober ###
        ### END /etc/grub.d/30_os-prober ###

        ### BEGIN /etc/grub.d/40_custom ###
        # This file provides an easy way to add custom menu entries. Simply type the
        # menu entries you want to add after this comment. Be careful not to change
        # the 'exec tail' line above.
        ### END /etc/grub.d/40_custom ###

        ### BEGIN /etc/grub.d/41_custom ###
        if [ -f $prefix/custom.cfg ]; then
          source $prefix/custom.cfg;
        fi
        ### END /etc/grub.d/41_custom ###

        ### BEGIN /etc/grub.d/90_persistent ###
        ### END /etc/grub.d/90_persistent ###

    Anyone got a clue why it still only references the fc16 kernel, and how I can upgrade it? My system uses RAID1 across two disks, but /boot is not on RAID. The mount for /boot is:

        /dev/sda2 on /boot type ext2 (rw,relatime,seclabel,user_xattr,acl,barrier=1)

    and / (the only other filesystem I have) is mounted as:

        /dev/md0 on / type ext4 (rw,relatime,seclabel,user_xattr,acl,barrier=1,data=ordered)
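
    A hedged sketch of the usual repair: the fc17 kernel RPM is installed, but the menu was never regenerated, so GRUB keeps offering only the fc16 entries. Regenerating the config (and, only if the machine still boots through an old bootloader on the MBR, reinstalling grub2 to the boot disk) normally picks up every installed kernel:

        # rebuild the menu from the kernels present in /boot
        grub2-mkconfig -o /boot/grub2/grub.cfg

        # only if the MBR still holds the old bootloader (disk is illustrative)
        grub2-install /dev/sda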


  • Disable error_log. Error_log flooding

    - by user36646
    Hello, I have a webserver running an old version of Gambio (an xt:commerce fork). The error_log in the directory above public_html is flooding with errors, about 30 MB every 15 minutes. How can I disable this log? I can't fix all of the errors. Here are a few examples:

        [warn] mod_fcgid: stderr: PHP Notice: Undefined variable: key in /usr/www/users/foo//includes/classes/class.inputfilter.php on line 98
        [warn] mod_fcgid: stderr: PHP Notice: Undefined index: in /usr/www/users/foo/templ
        [warn] mod_fcgid: stderr: in /usr/www/users/foo/templates/gambio/source/inc/xtc_show_category_sectionc.inc.php on line 47

    They are all "mod_fcgid: stderr" errors. I tried to grep for "error_log" and "error_report" in the public_html dir, but I did not find anything. Here is part of the phpinfo() output:

        PHP Version 4.4.9
        System: Linux foobar.com 2.6.26-2-686-bigmem #1 SMP Sat Dec 26 09:26:36 UTC 2009 i686
        Build Date: Feb 11 2010 13:00:33
        Configure Command: './configure' '--prefix=/usr/local/php4' '--with-config-file-path=/etc/php4/cgi' '--with-gd' '--with-jpeg-dir' '--with-png-dir' '--with-tiff-dir' '--with-ttf' '--enable-force-cgi-redirect' '--enable-safe-mode' '--with-zlib' '--enable-ftp' '--enable-url-includes' '--enable-gd-native-ttf' '--enable-trans-sid' '--enable-dbase' '--with-db4' '--with-ldap' '--enable-bcmath' '--enable-calendar' '--enable-memory-limit' '--with-mcal=/usr' '--with-bz2' '--with-mod-dav' '--enable-sockets' '--with-kerberos' '--with-imap-ssl' '--enable-gd-imgstrttf' '--with-freetype-dir' '--with-curl' '--with-mysql' '--with-mhash' '--with-gdbm' '--with-pgsql' '--with-gettext' '--with-xml' '--with-mcrypt' '--with-openssl' '--with-dom' '--without-pear' '--enable-exif' '--with-zip' '--enable-wddx' '--disable-cli' '--enable-fastcgi' '--with-imap' '--enable-xslt' '--with-xslt-sablot=/usr/local/lib' '--enable-mbstring' '--with-dom-xslt' '--with-dom-exslt'
        Server API: CGI/FastCGI
        Virtual Directory Support: disabled
        Configuration File (php.ini) Path: /home/httpd/php-ini/foo/php.ini
        PHP API: 20020918
        PHP Extension: 20020429
        Zend Extension: 20050606
        Debug Build: no
        Zend Memory Manager: enabled
        Thread Safety: disabled
        Registered PHP Streams: php, http, ftp, https, ftps, compress.bzip2, compress.zlib

        Configuration, PHP Core (directive, local value, master value):
        allow_call_time_pass_reference  On  On
        allow_url_fopen  Off  Off
        always_populate_raw_post_data  Off  Off
        arg_separator.input  &  &
        arg_separator.output  &  &
        asp_tags  Off  Off
        auto_append_file  no value  no value
        auto_prepend_file  no value  no value
        browscap  no value  no value
        default_charset  no value  no value
        default_mimetype  text/html  text/html
        define_syslog_variables  Off  Off
        disable_classes  no value  no value
        disable_functions  no value  no value
        display_errors  On  On
        display_startup_errors  Off  Off
        doc_root  no value  no value
        docref_ext  no value  no value
        docref_root  no value  no value
        enable_dl  On  On
        error_append_string  no value  no value
        error_log  no value  no value
        error_prepend_string  no value  no value
        error_reporting  2039  2039
        expose_php  On  On
        extension_dir  /usr/local/php4/lib/php/extensions/no-debug-non-zts-20020429  /usr/local/php4/lib/php/extensions/no-debug-non-zts-20020429
        file_uploads  On  On
        gpc_order  GPC  GPC
        highlight.bg  #FFFFFF  #FFFFFF
        highlight.comment  #FF8000  #FF8000
        highlight.default  #0000BB  #0000BB
        highlight.html  #000000  #000000
        highlight.keyword  #007700  #007700
        highlight.string  #DD0000  #DD0000
        html_errors  On  On
        ignore_repeated_errors  Off  Off
        ignore_repeated_source  Off  Off
        ignore_user_abort  Off  Off
        implicit_flush  Off  Off
        include_path  .:/usr/local/lib/php/  .:/usr/local/lib/php/
        log_errors  Off  Off
        log_errors_max_len  1024  1024
        magic_quotes_gpc  On  On
        magic_quotes_runtime  Off  Off
        magic_quotes_sybase  Off  Off
        max_execution_time  120  120
        max_input_nesting_level  500  500
        max_input_time  -1  -1
        memory_limit  128000000  128000000
        open_basedir  /usr/www/users/foo:/usr/home/foo:/tmp:/usr/local/lib/php:/usr/local/rmagic:/usr/www/users/he/_system_  /usr/www/users/foo:/usr/home/foo:/tmp:/usr/local/lib/php:/usr/local/rmagic:/usr/www/users/he/_system_
        output_buffering  no value  no value
        output_handler  no value  no value
        post_max_size  128000000  128000000
        precision  14  14
        register_argc_argv  On  On
        register_globals  Off  Off
        report_memleaks  On  On
        safe_mode  Off  Off
        safe_mode_exec_dir  no value  no value
        safe_mode_gid  Off  Off
        safe_mode_include_dir  no value  no value
        sendmail_from  no value  no value
        sendmail_path  /usr/sbin/sendmail -t  /usr/sbin/sendmail -t
        serialize_precision  100  100
        short_open_tag  On  On
        SMTP  localhost  localhost
        smtp_port  25  25
        sql.safe_mode  Off  Off
        track_errors  Off  Off
        unserialize_callback_func  no value  no value
        upload_max_filesize  128000000  128000000
        upload_tmp_dir  /usr/foo/foo/.tmp  /usr/foo/.tmp
        user_dir  no value  no value
        variables_order  EGPCS  EGPCS
        xmlrpc_error_number  0  0
        xmlrpc_errors  Off  Off
        y2k_compliance  Off  Off
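    One way to stop the flood, sketched below under the assumption that you can edit the per-user php.ini shown above (/home/httpd/php-ini/foo/php.ini): silence the notices at the PHP level. Note that error_reporting is already 2039, which in PHP 4 is E_ALL & ~E_NOTICE, yet notices still appear, so the application is probably resetting it at runtime; grepping the code for "error_reporting(" and "ini_set(" (rather than "error_report") may find the spot.

        ; /home/httpd/php-ini/foo/php.ini (path taken from the phpinfo() output above)
        ; Keep fatal errors and warnings but drop the notice flood.
        ; 2039 == E_ALL & ~E_NOTICE in PHP 4.
        error_reporting = E_ALL & ~E_NOTICE
        ; Stop PHP from emitting messages that mod_fcgid forwards to Apache's error_log.
        display_errors = Off
        log_errors = Off

    If the PHP side cannot be changed, the Apache vhost can instead be told to drop warn-level messages; your own log excerpt shows mod_fcgid forwarding the stderr lines at [warn], so raising the threshold hides them, at the cost of hiding all other warnings for that vhost:

        # Assumed location: the <VirtualHost> block for this site.
        LogLevel error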

    Read the article

  • Oracle Virtual Server OEL vm fails to start - kernel panic on cpu identify

    - by Towndrunk
    I am in the process of following a guide to set up various Oracle VM templates. So far I have installed OVS 2.2 and got the OVM Manager working, imported the template for OEL5U5, and created a VM from it. The problem comes when starting that VM. The log in the OVMM console shows the following:

        Update VM Status - Running
        Configure CPU Cap
        Set CPU Cap: failed:<Exception: failed:<Exception: ['xm', 'sched-credit', '-d', '32_EM11g_OVM', '-c', '0']
        => Error: Domain '32_EM11g_OVM' does not exist.
        StackTrace:
          File "/opt/ovs-agent-2.3/OVSXXenVMConfig.py", line 2531, in xen_set_cpu_cap
            run_cmd(args=['xm',
          File "/opt/ovs-agent-2.3/OVSCommons.py", line 92, in run_cmd
            raise Exception('%s => %s' % (args, err))

    The xend.log shows:

        [2012-11-12 16:42:01 7581] DEBUG (DevController:139) Waiting for devices vtpm
        [2012-11-12 16:42:01 7581] INFO (XendDomain:1180) Domain 32_EM11g_OVM (3) unpaused.
        [2012-11-12 16:42:03 7581] WARNING (XendDomainInfo:1907) Domain has crashed: name=32_EM11g_OVM id=3.
        [2012-11-12 16:42:03 7581] ERROR (XendDomainInfo:2041) VM 32_EM11g_OVM restarting too fast (Elapsed time: 11.377262 seconds). Refusing to restart to avoid loops.
        [2012-11-12 16:42:03 7581] DEBUG (XendDomainInfo:2757) XendDomainInfo.destroy: domid=3
        [2012-11-12 16:42:12 7581] DEBUG (XendDomainInfo:2230) Destroying device model
        [2012-11-12 16:42:12 7581] INFO (image:553) 32_EM11g_OVM device model terminated

    I have set set_on_crash="preserve" in the vm.cfg and have then run xm create -c to get the console screen while booting. This is the log of what happens:

        Started domain 32_EM11g_OVM (id=4)
        Bootdata ok (command line is ro root=LABEL=/ )
        Linux version 2.6.18-194.0.0.0.3.el5xen ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)) #1 SMP Mon Mar 29 18:27:00 EDT 2010
        BIOS-provided physical RAM map:
        Xen: 0000000000000000 - 0000000180800000 (usable)
        No mptable found.
        Built 1 zonelists. Total pages: 1574912
        Kernel command line: ro root=LABEL=/
        Initializing CPU#0
        PID hash table entries: 4096 (order: 12, 32768 bytes)
        Xen reported: 1600.008 MHz processor.
        Console: colour dummy device 80x25
        Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes)
        Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes)
        Software IO TLB disabled
        Memory: 6155256k/6299648k available (2514k kernel code, 135548k reserved, 1394k data, 184k init)
        Calibrating delay using timer specific routine.. 4006.42 BogoMIPS (lpj=8012858)
        Security Framework v1.0.0 initialized
        SELinux: Initializing.
        selinux_register_security: Registering secondary module capability
        Capability LSM initialized as secondary
        Mount-cache hash table entries: 256
        CPU: L1 I Cache: 64K (64 bytes/line), D cache 16K (64 bytes/line)
        CPU: L2 Cache: 2048K (64 bytes/line)
        general protection fault: 0000 [1] SMP
        last sysfs file:
        CPU 0
        Modules linked in:
        Pid: 0, comm: swapper Not tainted 2.6.18-194.0.0.0.3.el5xen #1
        RIP: e030:[<ffffffff80271280>] [<ffffffff80271280>] identify_cpu+0x210/0x494
        RSP: e02b:ffffffff80643f70 EFLAGS: 00010212
        RAX: 0040401000810008 RBX: 0000000000000000 RCX: 00000000c001001f
        RDX: 0000000000404010 RSI: 0000000000000001 RDI: 0000000000000005
        RBP: ffffffff8063e980 R08: 0000000000000025 R09: ffff8800019d1000
        R10: 0000000000000026 R11: ffff88000102c400 R12: 0000000000000000
        R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
        FS: 0000000000000000(0000) GS:ffffffff805d2000(0000) knlGS:0000000000000000
        CS: e033 DS: 0000 ES: 0000
        Process swapper (pid: 0, threadinfo ffffffff80642000, task ffffffff804f4b80)
        Stack: 0000000000000000 ffffffff802d09bb ffffffff804f4b80 0000000000000000
               0000000021100800 0000000000000000 0000000000000000 ffffffff8064cb00
               0000000000000000 0000000000000000
        Call Trace:
          [<ffffffff802d09bb>] kmem_cache_zalloc+0x62/0x80
          [<ffffffff8064cb00>] start_kernel+0x210/0x224
          [<ffffffff8064c1e5>] _sinittext+0x1e5/0x1eb
        Code: 0f 30 b8 73 00 00 00 f0 0f ab 45 08 e9 f0 00 00 00 48 89 ef
        RIP [<ffffffff80271280>] identify_cpu+0x210/0x494
        RSP <ffffffff80643f70>
        <0> Kernel panic - not syncing: Fatal exception

    This is all clear as mud to me. Are there any other logs that will help me? I have now deployed another VM from the same template using the default VM settings rather than adding more memory etc., and I get exactly the same error.
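    For anyone debugging the same crash: on an OVS 2.2 (Xen 3.x) dom0 there are a few standard places to look beyond the OVMM console. This is only a generic checklist under the assumption of default log locations; the domain name is taken from the logs above.

        # Hypervisor ring buffer; CPU identification problems often surface here first.
        xm dmesg | less

        # xend daemon logs, the source of the OVMM console messages above.
        less /var/log/xen/xend.log
        less /var/log/xen/xend-debug.log

        # Per-domain device-model log, if the guest runs with a qemu device model.
        less /var/log/xen/qemu-dm-32_EM11g_OVM.log

        # With the crash policy set to "preserve", the crashed domain stays
        # around and can still be inspected.
        xm list --long 32_EM11g_OVM

    Since the panic is inside identify_cpu in a paravirtualised kernel, comparing what xm dmesg reports about the host CPU with what the template's kernel expects would be a reasonable first check.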

    Read the article

  • Integrate SharePoint 2010 with Team Foundation Server 2010

    - by Martin Hinshelwood
    Our client is using a brand new, shiny installation of SharePoint 2010, so we need to integrate our upgraded Team Foundation Server 2010 instance into it. In order to do that you need to run the Team Foundation Server 2010 install on the SharePoint 2010 server and choose to install only the “Extensions for SharePoint Products and Technologies”. We want our upgraded Team Project Collection to create any new portal in this SharePoint 2010 server farm. There are a number of goodies above and beyond a solution file that require the install, with the main one being the TFS 2010 client API. These goodies allow proper integration with the creation and viewing of Work Items from SharePoint, a new feature in TFS 2010. This works in both SharePoint 2007 and SharePoint 2010, with the level of integration dependent on the version of SharePoint that you are running. There are three levels of integration, with “SharePoint Services 3.0” or “SharePoint Foundation 2010” being the lowest. This level only offers Reporting Services framed integration for reporting, along with Work Item integration and document management. The highest is Microsoft Office SharePoint Server (MOSS) Enterprise, where Excel Services integration provides some lovely dashboards.

    Figure: Dashboards take the guessing out of project planning and estimation. Plus, writing these reports by hand would be boring!

    The Extensions that you need are on the same installation media as the main TFS install; the only difference is the options you pick during the install.

    Figure: Installing the TFS 2010 Extensions for SharePoint Products and Technologies onto SharePoint 2010

    Annoyingly you may need to reboot a couple of times, but on this server the process was MUCH smoother than on our internal server. I think this was mostly down to this being a clean install. Once it is installed you need to run the configuration. This will add all of the Solutions and Templates that are needed for SharePoint to work properly with TFS.

    Figure: This is where all the TFS 2010 goodies are added to your SharePoint 2010 server and the TFS 2010 object model is installed.

    Figure: All done. You have everything installed, but you still need to configure it.

    Now that we have the TFS 2010 SharePoint Extensions installed on our SharePoint 2010 server, we need to configure both sides so that they will talk happily to each other.

    Configuring the SharePoint 2010 managed path for Team Foundation Server 2010

    In order for TFS to automatically create your project portals you need a wildcard managed path set up. This is where TFS will create the portal during the creation of a new Team Project. To find the managed paths page for any application you first need to select the “Manage web applications” link from the SharePoint 2010 Central Administration screen.

    Figure: Find the “Manage web applications” link under the “Application Management” section.

    Once you are there you will see that the “Managed Paths” button is present but greyed out; selecting one of the applications enables it to be clicked.

    Figure: You need to select an application for the SharePoint 2010 ribbon to activate.

    Figure: You need to select an application before you can get to the Managed Paths for that application.

    Now we need to add a managed path for TFS 2010 to create its portals under. I have gone for the obvious option of just calling the managed path “TFS02”, as this TFS 2010 server is the second TFS server the client has installed, TFS 2008 being the first.
    This links the location to the server name, and as project names are unique within a Team Project Collection and each collection gets its own sub site, conflicts are unlikely.

    Figure: Add a “tfs02” wildcard inclusion path to your SharePoint site.

    (If you prefer to script this step, see the PowerShell sketch at the end of this article.)

    Configure the Team Foundation Server 2010 connection to SharePoint 2010

    In order to have your new TFS 2010 server talk to and create sites in SharePoint 2010, you need to tell the TFS server where to put them. As this TFS 2010 server was installed in out-of-the-box mode, it has a SharePoint Services 3.0 (the free one) server running on the same box. But we want to change that so we can use the external SharePoint 2010 instance. Just open the “Team Foundation Server Administration Console” and navigate to the “SharePoint Web Applications” section. Here you click “Add” and enter the details for the managed path we just created.

    Figure: If you have special permissions on your SharePoint you may need to add accounts to the “Service Accounts” section.

    Before we can set this new SharePoint 2010 instance as the default for our upgraded Team Project Collection, we need to configure SharePoint to take instructions from our TFS server.

    Configure SharePoint 2010 to connect to Team Foundation Server 2010

    On your SharePoint 2010 server, open the Team Foundation Server Administration Console and select the “Extensions for SharePoint Products and Technologies” node. Here we need to grant access for our TFS 2010 server to create sites. Click the “Grant access” link and fill out the full URL to the TFS server, for example http://servername.domain.com:8080/tfs, and if need be restrict the path that TFS sites can be created on. Remember that when users create a new Team Project they can change the default and point it anywhere they like, as long as it is an authorised SharePoint location.

    Figure: Grant access for your TFS 2010 server to create sites in SharePoint 2010.

    Now that we have an authorised location for our Team Project portals to be created, we need to tell our Team Project Collection that this is where it should put sites by default for any new Team Projects created.

    Configure the Team Foundation Server 2010 Team Project Collection to create new sites in SharePoint 2010

    Back on our TFS 2010 server we need to point the defaults for our upgraded Team Project Collection at the new SharePoint 2010 integration we have just set up. On the TFS 2010 server, open up the “Team Foundation Server Administration Console” again and navigate to the “Team Project Collections” node. Once you are there you will see a list of all of your TPCs; in our case we have a DefaultCollection as well as our named and upgraded collection from TFS 2008. If you select the “SharePoint Site” tab you can see that it is not currently configured.

    Figure: Our newly upgraded TFS 2008 Team Project Collection does not have SharePoint configured.

    Select “Edit Default Site Location” and choose the new integration point that we just set up for SharePoint 2010. Once you have selected the “SharePoint Web Application” (the thing we just configured), it will give you an example based on that configuration point and the name of the Team Project Collection that we are configuring.

    Figure: Set the default location for new Team Project portals to be created for this Team Project Collection.

    This is where the reason for configuring the Extensions on the SharePoint 2010 server before doing this last bit becomes apparent.
    TFS 2010 is going to create a site under our http://sharepointserver/tfs02/ location called http://sharepointserver/tfs02/[TeamProjectCollection], or whatever we had specified, and it would have had difficulty doing this if we had not granted it permission first.

    Figure: If there is no Team Project Collection site at this location, the TFS 2010 server is going to create one.

    This will create a nice Team Project Collection parent site to contain the portals for any new Team Projects that are created. It is worth noting that it will not create portals for existing Team Projects, as this process runs during the Team Project creation wizard.

    Figure: Just a basic parent site to host all of your new Team Project portals as sub sites.

    You will need to make all of the users that will be creating Team Projects administrators of this site so that they do not get an error during the Project Creation Wizard. You may also want to customise this as a proper portal to your projects if you are going to have lots of them, but it is really just a default placeholder so you have a top-level site that you can back up and point at.

    You have now integrated SharePoint 2010 and Team Foundation Server 2010! You can now go forth and multiply your Team Projects for this Team Project Collection, or you can continue to add portals to your other collections.

    Technorati Tags: TFS 2010, SharePoint 2010, VS ALM
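    A scripting footnote to the managed-path step above: the same wildcard inclusion can be created from the SharePoint 2010 Management Shell instead of Central Administration. This is a minimal sketch, assuming your web application lives at http://sharepointserver (a placeholder, not a value from the article); New-SPManagedPath creates a wildcard inclusion unless -Explicit is passed.

        # Run in the SharePoint 2010 Management Shell on the SharePoint server.
        # "http://sharepointserver" is assumed; use your real web application URL.
        New-SPManagedPath -RelativeURL "tfs02" -WebApplication "http://sharepointserver"

        # Verify the path was created.
        Get-SPManagedPath -WebApplication "http://sharepointserver" |
            Where-Object { $_.Name -eq "tfs02" }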

    Read the article
