Search Results

Search found 2110 results on 85 pages for 'priority'.

  • What's wrong with this code? Values not saved to db

    - by Scott B
    Been trying to get this code to work for several days now, to no avail. I'm at wit's end. With the code below I've managed to create a customized category picker widget that appears on the PAGE editor. However, for the life of me, I cannot get the checked categories to save.

        function my_post_options_box() {
            if ( function_exists('add_meta_box') ) {
                //add_meta_box( $id, $title, $callback, $page, $context, $priority );
                add_meta_box('categorydiv', __('Page Options'), 'post_categories_meta_box_modified', 'page', 'side', 'core');
            }
        }

        //adds the custom categories box
        function post_categories_meta_box_modified($post) {
            $noindexCat = get_cat_ID('noindex');
            $nofollowCat = get_cat_ID('nofollow');
            if (in_category("noindex")) { $noindexChecked = " checked='checked'"; }
            else { $noindexChecked = ""; }
            if (in_category("nofollow")) { $nofollowChecked = " checked='checked'"; }
            else { $noindexChecked = ""; }
            ?>
            <div id="categories-all" class="ui-tabs-panel">
                <ul id="categorychecklist" class="list:category categorychecklist form-no-clear">
                    <li id='category-<?php echo $noindexCat ?>' class="popular-category"><label class="selectit"><input value="<?php echo $noindexCat ?>" type="checkbox" name="post_category[]" id="in-category-<?php echo $noindexCat ?>"<?php echo $noindexChecked ?> /> noindex</label></li>
                    <li id='category-<?php echo $nofollowCat ?>' class="popular-category"><label class="selectit"><input value="<?php echo $noindexCat ?>" type="checkbox" name="post_category[]" id="in-category-<?php echo $nofollowCat ?>"<?php echo $nofollowChecked ?> /> nofollow</label></li>
                    <li id='category-1' class="popular-category"><label class="selectit"><input value="1" type="checkbox" name="post_category[]" id="in-category-1" checked="checked"/> Uncategorized</label></li>
                </ul>
            </div>
        <?php }

    Read the article

  • Audio/video streaming on Windows platform

    - by bushtucker
    I'm building an interactive language learning application to be used in a classroom environment. The idea is that a teacher should be able to talk to the students (= audio stream to all students), let students talk to each other (= audio P2P) in groups of two or more, and let students watch a video coming from the DVD player or from a media server. It should be possible to save the audio/video streams. The teacher should also be able to monitor, take over or block the desktop of the students. The platform is Windows and it's a desktop application, not a web application. The audio delay should be as minimal as possible. Optionally a student sitting at home should be supported, but it's not a high priority. I am now finished with the classroom control part of the application (login, monitor, block, ...) and want to start the audio and video part. I've been evaluating several options like DirectX, GStreamer and SIP, but now I have to make a decision. DirectX seems an obvious choice for the Windows platform, but it only lets me capture and play back audio and video; the encoding/decoding/network part I would have to do myself. GStreamer contains all kinds of options to capture/encode/stream/save audio and video streams. I've experimented a bit with it (ossbuild) and it does seem to involve a lot of trial and error to make something work:

      - microphone capture (via directsoundsrc) produces cracking noises on some computers
      - the rtpL16 payloader didn't work well
      - streaming raw audio over the network only works at a sampling rate of 8000, no higher
      - there are a lot of errors when receiving mpeg4 video (bad I-frame), on some computers worse than others

    It is my impression that GStreamer is primarily targeted at Linux platforms; development and support for the Windows platform seems to be a little behind. Nevertheless it's a powerful framework that could save me months or years of work. SIP seems to be able to do everything I want, but it is targeted towards telephony and IM. I don't know how flexible SIP is. It seems to me that the SIP layer would just be overhead, as I already have a central (teacher) application that can control and set up all the streams. The interesting parts of frameworks like opalvoip and freeswitch are the actual audio/video capture, the encoding and the transmission. Does anyone know how these interesting parts relate to a framework like GStreamer? Are they easy to integrate into a custom application? Are they flexible enough? Does anyone have experience with all or one of these technologies? Maybe there are even other options I can look at? Many thanks for your advice.

    Read the article

  • Exception showing an erroneous web page in a WPF frame

    - by H4mm3rHead
    I have a small application where I need to navigate to a URL. I use this method to get the Frame:

        public override System.Windows.UIElement GetPage(System.Windows.UIElement container)
        {
            XmlDocument doc = new XmlDocument();
            doc.Load(Location);
            string webSiteUrl = doc.SelectSingleNode("website").InnerText;
            Frame newFrame = new Frame();
            if (!webSiteUrl.StartsWith("http://"))
            {
                webSiteUrl = "http://" + webSiteUrl;
            }
            newFrame.Source = new Uri(webSiteUrl);
            return newFrame;
        }

    My problem is that the page I'm trying to show generates an error (or so I think). When I load the page in a browser it never fully loads: it keeps saying "loading 1 element" in the load bar and the green progress line (IE 8) keeps showing. When I attach my debugger I get this error:

        System.ArgumentException was unhandled
        Message="Parameter and value pair is not valid. Expected form is parameter=value."
        Source="WindowsBase"
        StackTrace:
            at MS.Internal.ContentType.ParseParameterAndValue(String parameterAndValue)
            at MS.Internal.ContentType..ctor(String contentType)
            at MS.Internal.WpfWebRequestHelper.GetContentType(WebResponse response)
            at System.Windows.Navigation.NavigationService.GetObjectFromResponse(WebRequest request, WebResponse response, Uri destinationUri, Object navState)
            at System.Windows.Navigation.NavigationService.HandleWebResponse(IAsyncResult ar)
            at System.Windows.Navigation.NavigationService.<>c__DisplayClassc.<HandleWebResponseOnRightDispatcher>b__8(Object unused)
            at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter)
            at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
            at System.Windows.Threading.DispatcherOperation.InvokeImpl()
            at System.Threading.ExecutionContext.runTryCode(Object userData)
            at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
            at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
            at System.Windows.Threading.DispatcherOperation.Invoke()
            at System.Windows.Threading.Dispatcher.ProcessQueue()
            at System.Windows.Threading.Dispatcher.WndProcHook(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
            at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
            at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o)
            at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter)
            at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
            at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Boolean isSingleParameter)
            at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam)
            at MS.Win32.UnsafeNativeMethods.DispatchMessage(MSG& msg)
            at System.Windows.Threading.Dispatcher.TranslateAndDispatchMessage(MSG& msg)
            at System.Windows.Threading.Dispatcher.PushFrameImpl(DispatcherFrame frame)
            at System.Windows.Application.RunInternal(Window window)
            at GreenWebPlayerWPF.App.Main() in C:\Development\Hvarregaard\GWDS\GreenWeb\GreenWebPlayerWPF\obj\Debug\App.g.cs:line 0
            at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
            at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
            at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
            at System.Threading.ThreadHelper.ThreadStart()
        InnerException:

    Anyone? Or any way to capture it and respond to it? I tried a try/catch around my code, but it's not caught - it seems something deep inside the guts of the CLR is failing.

    Read the article

  • Opinions on collision detection objects with a moving scene

    - by Evan Teran
    So my question is simple, and I guess it boils down to how anal you want to be about collision detection. To keep things simple, let's assume we're talking about 2D sprites defined by a bounding box. In addition, let's assume that my sprite object has a function to detect collisions like this: s.collidesWith(other); Finally, the scene is moving, "walls" in the scene can move, and an object may not touch a wall. So a simple implementation might look like this (pseudocode):

        moveWalls();
        moveSprite();
        foreach(wall as w) {
            if(s.collidesWith(w)) { gameover(); }
        }

    The problem with this is that if the sprite and wall move towards each other, depending on the circumstances (such as diagonal movement), they may pass through each other (unlikely, but it could happen). So I may do this instead:

        moveWalls();
        foreach(wall as w) {
            if(s.collidesWith(w)) { gameover(); }
        }
        moveSprite();
        foreach(wall as w) {
            if(s.collidesWith(w)) { gameover(); }
        }

    This takes care of the passing-through-each-other issue, but another rare issue comes up. If they are adjacent to each other (literally the next pixel) and both the wall and the sprite are moving left, then I will get an invalid collision, since the wall moves, collision is checked (hit), and only then is the sprite moved. Which seems unfair. In addition to that, the redundant collision detection feels very inefficient. I could give the player movement priority, alleviating the first issue, but it is still checking twice:

        moveSprite();
        foreach(wall as w) {
            if(s.collidesWith(w)) { gameover(); }
        }
        moveWalls();
        foreach(wall as w) {
            if(s.collidesWith(w)) { gameover(); }
        }

    Am I simply overthinking this issue - should this just be chalked up to "it'll happen rarely enough that no one will care"? Certainly, looking at old sprite-based games, I often find situations where the collision detection has subtle flaws, but I figure by now we can do better :-P. What are people's thoughts?
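
    One way to sidestep both the double pass and the ordering unfairness is to move everything first and then run a single collision pass using boxes swept over each object's motion for the frame. Below is a minimal sketch of that idea in C++; the Box type, the sweep() helper and all names are mine, and the union test is deliberately conservative - on diagonal paths it can flag near misses that a finer continuous test would reject.

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        struct Box { float x0, y0, x1, y1; };

        // Smallest box covering an object's position before and after this frame's move.
        static Box sweep(const Box& before, const Box& after) {
            return { std::min(before.x0, after.x0), std::min(before.y0, after.y0),
                     std::max(before.x1, after.x1), std::max(before.y1, after.y1) };
        }

        static bool overlaps(const Box& a, const Box& b) {
            return a.x0 < b.x1 && b.x0 < a.x1 && a.y0 < b.y1 && b.y0 < a.y1;
        }

        // Single collision pass per frame: move sprite and walls first, then test
        // swept extents, so neither mover gets priority and tunneling is caught.
        bool spriteHitsAnyWall(const Box& spriteBefore, const Box& spriteAfter,
                               const std::vector<Box>& wallsBefore,
                               const std::vector<Box>& wallsAfter) {
            const Box sweptSprite = sweep(spriteBefore, spriteAfter);
            for (std::size_t i = 0; i < wallsBefore.size(); ++i) {
                if (overlaps(sweptSprite, sweep(wallsBefore[i], wallsAfter[i])))
                    return true;
            }
            return false;
        }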

    Read the article

  • Permanent mutex locking causing deadlock?

    - by Daniel
    I am having a problem with mutexes (pthread_mutex on Linux) where, if a thread locks a mutex again right after unlocking it, another thread is not very successful at getting the lock. I've attached test code where one mutex is created, along with two threads that in an endless loop lock the mutex, sleep for a while and unlock it again. The output I expect to see is "alive" messages from both threads, alternating one from each (e.g. 121212121212). However, what I get is that one thread gets the majority of locks (e.g. 111111222222222111111111 or just 1111111111111...). If I add a usleep(1) after the unlocking, everything works as expected. Apparently when the thread goes to sleep the other thread gets its lock - however, this is not the way I was expecting it, as the other thread has already called pthread_mutex_lock. I suspect this is the way it is implemented, in that the active thread has priority; however, it causes a certain problem in this particular test case. Is there any way to prevent it (short of adding a deliberately large enough delay or some kind of signaling), or where is my error in understanding?

        #include <pthread.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/time.h>
        #include <unistd.h>

        pthread_mutex_t mutex;

        void* threadFunction(void *id)
        {
            int count = 0;
            while (true) {
                pthread_mutex_lock(&mutex);
                usleep(50*1000);
                pthread_mutex_unlock(&mutex);
                // usleep(1);
                ++count;
                if (count % 10 == 0) {
                    printf("Thread %d alive\n", *(int*)id);
                    count = 0;
                }
            }
            return 0;
        }

        int main()
        {
            // create one mutex
            pthread_mutexattr_t attr;
            pthread_mutexattr_init(&attr);
            pthread_mutex_init(&mutex, &attr);

            // create two threads
            pthread_t thread1;
            pthread_t thread2;
            pthread_attr_t attributes;
            pthread_attr_init(&attributes);
            int id1 = 1, id2 = 2;
            pthread_create(&thread1, &attributes, &threadFunction, &id1);
            pthread_create(&thread2, &attributes, &threadFunction, &id2);
            pthread_attr_destroy(&attributes);

            sleep(1000);
            return 0;
        }
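
    For what it's worth, a pthread mutex makes no fairness guarantee: the thread that just called pthread_mutex_unlock can reacquire the lock before the blocked waiter is even scheduled, which matches the behaviour described above. If strict FIFO ordering matters, one option is a "ticket" lock built from a mutex and a condition variable. This is only a sketch under that assumption (the names are mine, error handling is omitted); initialize with ticket_lock_t t = {PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0};

        #include <pthread.h>

        typedef struct {
            pthread_mutex_t m;
            pthread_cond_t  cv;
            unsigned long   next_ticket;  // next ticket to hand out
            unsigned long   now_serving;  // ticket currently allowed in
        } ticket_lock_t;

        void ticket_lock(ticket_lock_t *t)
        {
            pthread_mutex_lock(&t->m);
            unsigned long my_ticket = t->next_ticket++;
            while (my_ticket != t->now_serving)
                pthread_cond_wait(&t->cv, &t->m);  // sleep until it's our turn
            pthread_mutex_unlock(&t->m);
        }

        void ticket_unlock(ticket_lock_t *t)
        {
            pthread_mutex_lock(&t->m);
            t->now_serving++;
            pthread_cond_broadcast(&t->cv);  // wake waiters so they re-check their tickets
            pthread_mutex_unlock(&t->m);
        }

    Replacing the pthread_mutex_lock/unlock pair in threadFunction with ticket_lock/ticket_unlock should produce the alternating 1212... output, at the cost of a broadcast per unlock.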

    Read the article

  • [C++] std::string manipulation: whitespace, "newline escapes '\'" and comments #

    - by rubenvb
    Kind of looking for affirmation here. I have some hand-written code, which I'm not shy to say I'm proud of, which reads a file, removes leading whitespace, processes newline escapes '\' and removes comments starting with #. It also removes all empty lines (including whitespace-only ones). Any thoughts/recommendations? I could probably replace some std::cout's with std::runtime_errors... but that's not a priority here :)

        const int RecipeReader::readRecipe()
        {
            ifstream is_recipe(s_buffer.c_str());
            if (!is_recipe)
                cout << "unable to open file" << endl;
            while (getline(is_recipe, s_buffer))
            {
                // whitespace+comment
                removeLeadingWhitespace(s_buffer);
                processComment(s_buffer);
                // newline escapes + append all subsequent lines with '\'
                processNewlineEscapes(s_buffer, is_recipe);
                // store the real text line
                if (!s_buffer.empty())
                    v_s_recipe.push_back(s_buffer);
                s_buffer.clear();
            }
            is_recipe.close();
            return 0;
        }

        void RecipeReader::processNewlineEscapes(string &s_string, ifstream &is_stream)
        {
            string s_temp;
            size_t sz_index = s_string.find_first_of("\\");
            while (sz_index <= s_string.length())
            {
                if (getline(is_stream, s_temp))
                {
                    removeLeadingWhitespace(s_temp);
                    processComment(s_temp);
                    s_string = s_string.substr(0, sz_index-1) + " " + s_temp;
                }
                else
                    cout << "Error: newline escape '\' found at EOF" << endl;
                sz_index = s_string.find_first_of("\\");
            }
        }

        void RecipeReader::processComment(string &s_string)
        {
            size_t sz_index = s_string.find_first_of("#");
            s_string = s_string.substr(0, sz_index);
        }

        void RecipeReader::removeLeadingWhitespace(string &s_string)
        {
            const size_t sz_length = s_string.size();
            size_t sz_index = s_string.find_first_not_of(" \t");
            if (sz_index <= sz_length)
                s_string = s_string.substr(sz_index);
            else if ((sz_index > sz_length) && (sz_length != 0)) // "empty" lines with only whitespace
                s_string.clear();
        }

    Some extra info: the first s_buffer passed to the ifstream contains the filename; std::string s_buffer is a class data member, and so is std::vector v_s_recipe. Any comment is welcome :)
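
    One small simplification worth considering: std::string::find_first_not_of returns std::string::npos when the line is empty or all whitespace, so the two branches in removeLeadingWhitespace can collapse into a single test plus erase. A sketch of that idiom (behavior should match the original, including clearing whitespace-only lines):

        #include <string>

        void removeLeadingWhitespace(std::string &s)
        {
            // npos means the whole line is whitespace (or empty): drop everything.
            const std::string::size_type first = s.find_first_not_of(" \t");
            if (first == std::string::npos)
                s.clear();
            else
                s.erase(0, first);  // remove only the leading run of spaces/tabs
        }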

    Read the article

  • How does your team work together in a remote setup?

    - by Carl Rosenberger
    Hi, we are a distributed team working on the object database db4o. The way we work: We try to program in pairs only. We use Skype and VNC or SharedView to connect and work together. In our online Tuesday meeting every week (usually about 1 hour) we talk about the tasks done last week; we create new pairs for the next week with a random generator, so knowledge and friendship distribute evenly; we set the priority for any new tasks or bugs that have come in; and each team picks the tasks it likes to do from the highest prioritized ones. From Tuesday to Wednesday we estimate tasks. We have a unit of work we call "Ideal Developer Session" (IDS), maybe 2 or 3 hours of working together as a pair. It's not perfectly well defined (because we know estimation is always inaccurate), but from our past shared experience we have a common sense of what an IDS is. If we can't estimate a task because it feels too long for a week, we break it down into estimatable smaller tasks. During a short meeting on Wednesday we commit to a workload we feel is well doable in a week. We commit to complete. If a team runs out of committed tasks during the week, it can pick new ones from the prioritized queue we have in Jira. When we started working this way, some of us found that remote pair programming takes a lot of energy because you are so focused. If you pair program for more than 5 or 6 hours per day, you get drained. On the other hand, working like this has turned out to be very efficient. The knowledge about our codebase is evenly distributed and we have really learnt a lot from each other. I would be very interested to hear about the experiences of other teams working in a similar way. Things like:

      - How often do you meet?
      - Have you tried different sprint lengths (one week, two weeks, longer)?
      - Which tools do you use?
      - Which issue tracker do you use?
      - What do you do about time zone differences?
      - How does it work for you to integrate new people into the team?
      - How many hours do you usually work per week?
      - How does your management interact with the way you are working? Do you get put on a waterfall with hard deadlines?
      - What's your unit of work?
      - What is your normal velocity (units of work done per week)?

    Programming work should be fun, and for us it usually is great fun. I would be happy about any new ideas to make it even more fun and/or more efficient.

    Read the article

  • How to implement a caching model without violating MVC pattern?

    - by RPM1984
    Hi Guys, I have an ASP.NET MVC 3 (Razor) web application with a particular page which is highly database intensive, and user experience is of the utmost priority. Thus, I am introducing caching on this particular page. I'm trying to figure out a way to implement this caching pattern whilst keeping my controller thin, like it currently is without caching:

        public PartialViewResult GetLocationStuff(SearchPreferences searchPreferences)
        {
            var results = _locationService.FindStuffByCriteria(searchPreferences);
            return PartialView("SearchResults", results);
        }

    As you can see, the controller is very thin, as it should be. It doesn't care about how/where it is getting its info from - that is the job of the service. A couple of notes on the flow of control:

      - Controllers get DI'ed a particular service, depending on their area. In this example, this controller gets a LocationService.
      - Services call through to an IQueryable<T> repository and materialize results into T or ICollection<T>.

    How I want to implement caching: I can't use output caching, for a few reasons. First of all, this action method is invoked from the client side (jQuery/AJAX) via [HttpPost], which according to HTTP standards should not be cached as a request. Secondly, I don't want to cache purely based on the HTTP request arguments - the cache logic is a lot more complicated than that; there is actually two-level caching going on. As I hint above, I need to use regular data caching, e.g. Cache["somekey"] = someObj;. I don't want to implement a generic caching mechanism where all calls via the service go through the cache first - I only want caching on this particular action method. First thoughts would tell me to create another service (which inherits LocationService) and provide the caching workflow there (check cache first, if not there call db, add to cache, return result). That has two problems:

      - The services are basic class libraries, with no references to anything extra. I would need to add a reference to System.Web here.
      - I would have to access the HTTP context outside of the web application, which is considered bad practice, not only for testability, but in general - right?

    I also thought about using the Models folder in the web application (which I currently use only for ViewModels), but having a cache service in a models folder just doesn't sound right. So - any ideas? Is there an MVC-specific thing (like action filters, for example) I can use here? General advice/tips would be greatly appreciated.

    Read the article

  • ANSI C as core of a C# project? Is this possible?

    - by Nektarios
    I'm writing a NON-GUI app which I want to be cross-platform between OS X and Windows. I'm looking at the following architecture, but I don't know if it will work on the Windows side:

        (Platform-specific entry point)
          -> ANSI C main loop
          -> ANSI C model code doing data processing / logic
          -> (Platform-specific helpers)

    So the core stuff I'm planning to write in regular ANSI C, because A) it should be platform independent, B) I'm extremely comfortable with C, and C) it can do the job and do it well. The (platform-specific entry point) can be written in whatever is necessary to get the job done; this is a small amount of code, it doesn't matter to me. The (platform-specific helpers) part is the sticky thing. This is stuff like parsing XML, accessing databases, graphics toolkit stuff, whatever - things that aren't easy in C, and that modern languages/frameworks will give you for free. On OS X this code will be written in Objective-C interfacing with Cocoa. On Windows I'm thinking my best bet is to use C#. So on Windows my architecture (simplified) looks like:

        (C# or C?) -> ANSI C -> C#

    Is this possible? Some thoughts/suggestions so far:

      1) Compile my C core as a .dll -- this is fine, but there seems to be no way to call my C# helpers unless I can somehow get function pointers and pass them to my core, and that seems unlikely.
      2) Compile a C .exe and a C# .exe and have them talk via shared memory or some kind of IPC. I'm not entirely opposed to this, but it obviously introduces a lot of complexity, so it doesn't seem ideal.
      3) Instead of C#, use C++: it gets me some nice data management stuff and nice helper code, I can mix it pretty easily, and the work I do could probably be ported to Linux easily. But I really don't like C++, and I don't want this to turn into a third-party-library-fest. Not that it's a huge deal, but it's 2010... anything for basic data management should be built in. And targeting Linux is really not a priority.

    Note that no "total" alternatives are OK, as suggested in other similar questions I've seen on SO; Java, REALbasic, Mono... this is an extremely performance-intensive application doing soft realtime for game/simulation purposes. I need C & friends here to do it right (maybe you don't, but I do).
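
    For what it's worth, option 1 may be more workable than it looks: C# can hand delegates to native code via P/Invoke, where they arrive as plain function pointers (on the C# side this pairs with [DllImport] and either a delegate parameter or Marshal.GetFunctionPointerForDelegate). A sketch of what the C side of such a bridge could look like - the names and the callback shape here are invented for illustration:

        /* core.c - ANSI C core compiled as a DLL; C# registers helpers at startup. */
        #include <stddef.h>

        /* Signature a C# helper must match (e.g. "parse this XML, return a status"). */
        typedef int (*helper_parse_xml_fn)(const char *xml, size_t len);

        static helper_parse_xml_fn g_parse_xml = NULL;

        /* Called once from C# via P/Invoke, passing a delegate as the function pointer. */
        __declspec(dllexport) void core_register_parse_xml(helper_parse_xml_fn fn)
        {
            g_parse_xml = fn;
        }

        /* The core calls back into C# without knowing (or caring) that it's C#. */
        __declspec(dllexport) int core_process(const char *xml, size_t len)
        {
            if (g_parse_xml == NULL)
                return -1;  /* helper not registered yet */
            return g_parse_xml(xml, len);
        }

    The usual caveat is that the C# side must keep the delegate alive (e.g. in a static field) so the garbage collector doesn't reclaim it while native code still holds the pointer.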

    Read the article

  • Attach file to mail using PHP

    - by ktsixit
    Hi all, I've created a form which contains a file upload field and some other text fields. I'm using PHP to send the form's data via email and attach the file. This is the code I'm using, but it's not working properly: the file is attached to the message as expected, but the rest of the data is not sent.

        $body = "bla bla bla";
        $attachment = $_FILES['cv']['tmp_name'];
        $attachment_name = $_FILES['cv']['name'];
        if (is_uploaded_file($attachment)) {
            $fp = fopen($attachment, "rb");
            $data = fread($fp, filesize($attachment));
            $data = chunk_split(base64_encode($data));
            fclose($fp);
        }

        $headers = "From: $email<$email>\n";
        $headers .= "Reply-To: <$email>\n";
        $headers .= "MIME-Version: 1.0\n";
        $headers .= "Content-Type: multipart/related; type=\"multipart/alternative\"; boundary=\"----=MIME_BOUNDRY_main_message\"\n";
        $headers .= "X-Sender: $first_name $family_name<$email>\n";
        $headers .= "X-Mailer: PHP4\n";
        $headers .= "X-Priority: 3\n";
        $headers .= "Return-Path: <$email>\n";
        $headers .= "This is a multi-part message in MIME format.\n";
        $headers .= "------=MIME_BOUNDRY_main_message \n";
        $headers .= "Content-Type: multipart/alternative; boundary=\"----=MIME_BOUNDRY_message_parts\"\n";

        $message = "------=MIME_BOUNDRY_message_parts\n";
        $message .= "Content-Type: text/html; charset=\"utf-8\"\n";
        $message .= "Content-Transfer-Encoding: quoted-printable\n";
        $message .= "\n";
        $message .= "$body\n";
        $message .= "\n";
        $message .= "------=MIME_BOUNDRY_message_parts--\n";
        $message .= "\n";
        $message .= "------=MIME_BOUNDRY_main_message\n";
        $message .= "Content-Type: application/octet-stream;\n\tname=\"" . $attachment_name . "\"\n";
        $message .= "Content-Transfer-Encoding: base64\n";
        $message .= "Content-Disposition: attachment;\n\tfilename=\"" . $attachment_name . "\"\n\n";
        $message .= $data; //The base64 encoded message
        $message .= "\n";
        $message .= "------=MIME_BOUNDRY_main_message--\n";

        $subject = 'bla bla bla';
        $to = "[email protected]";
        mail($to, $subject, $message, $headers);

    Why isn't the $body data sent? Can you help me fix it?

    Read the article

  • Override default javascript functionality with jQuery

    - by deth4uall
    I am trying to override a JavaScript onchange event in the code below with jQuery; however, I believe that the inline JavaScript is taking priority over the jQuery functionality declared. Essentially I am trying to have an AJAX version of my site which includes an additional JavaScript file. I need this functionality to still work without the additional AJAX version, but I am not sure whether I should include it in the main JavaScript file or leave it inline like it is right now. Any suggestions and information regarding them would be greatly appreciated! Thanks!

        <form action="/cityhall/villages/" method="post" name="submit_village">
            <select name="village" onchange="document.submit_village.submit();">
                <option value=""></option>
            </select>
        </form>

    I am trying to use the jQuery Form Plugin to submit the posts to the PHP that handles the requests, as follows:

        var bank_load_options = {
            target: '#content',
            beforeSubmit: showRequest,
            success: showResponse
        };
        $('form.get_pages').livequery(function(){
            $(this).ajaxForm(bank_load_options);
            return false;
        });

    I modified the code as follows:

        <form action="/cityhall/villages/" method="post" id="submit_village" name="submit_village">
            <select name="village" class="get_pages" rel="submit_village">
                <option value=""></option>
            </select>
        </form>

        <script>
        // main JavaScript
        $('.get_pages').live('change', function(e){
            var submit_page = $(this).attr('rel');
            $("#"+submit_page).submit();
        });

        // ajax JavaScript
        var bank_load_options = {
            target: '#content',
            beforeSubmit: showRequest,
            success: showResponse
        };
        $('.get_pages').live('change', function(){
            var submit_page = $(this).attr('rel');
            $("#"+submit_page).ajaxForm(get_pages_load_options);
            return false;
        });
        </script>

    However, now it only runs on every other option when I change it.

    Read the article

  • add space to every word's end in a string in C

    - by hlx98007
    Here I have a string: *line = "123 567 890 "; with 2 spaces at the end. I wish to add those 2 spaces to 3's end and 7's end, to make it like this: "123 567 890". I was trying to achieve the following steps:

      - Parse the string word by word into a list (array of strings). From the upstream function I will get the values of the variables word_count, *line and remain.
      - Concatenate each word with a space at the end.
      - Add the remaining spaces distributively, with left-to-right priority, so when a fair division cannot be done, the second-to-last word's end gets (spaces) spaces and the previous ones get (spaces + 1) spaces.
      - Concatenate everything together to make a new *line.

    Here is a part of my faulty code:

        int add_space(char *line, int remain, int word_count)
        {
            if (remain == 0.0)
                return 0; // Don't need to operate.
            int ret;
            char arr[word_count][line_width];
            memset(arr, 0, word_count * line_width * sizeof(char));
            char *blank = calloc(line_width, sizeof(char));
            if (blank == NULL) {
                fprintf(stderr, "calloc for arr error!\n");
                return -1;
            }
            for (int i = 0; i < word_count; i++) {
                ret = sscanf(line, "%s", arr[i]); // gdb shows somehow it won't read in.
                if (ret != 1) {
                    fprintf(stderr, "Error occured!\n");
                    return -1;
                }
                arr[i] = strcat(arr[i], " "); // won't compile.
            }
            size_t spaces = remain / (word_count * 1.0);
            memset(blank, ' ', spaces + 1);
            for (int i = 0; i < word_count - 1; i++) {
                arr[0] = strcat(arr[i], blank); // won't compile.
            }
            memset(blank, ' ', spaces);
            arr[word_count-1] = strcat(arr[word_count-1], blank);
            for (int i = 1; i < word_count; i++) {
                arr[0] = strcat(arr[0], arr[i]);
            }
            free(blank);
            return 0;
        }

    It is not working. Could you help me find the parts that do not work and fix them, please? Thank you guys.
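
    For reference, here is a sketch of one way the distribution step could come out. This is my own rewrite rather than a fix of the code above: it writes into a caller-supplied buffer, and it assumes line already has its trailing spaces trimmed, with the words separated by single spaces and remain holding the number of spaces to redistribute.

        #include <stddef.h>
        #include <stdio.h>

        /* Distribute `remain` extra spaces over the gaps between words, leftmost
         * gaps first, writing the padded line into `out` (capacity out_len). */
        int add_space(const char *line, int remain, int word_count,
                      char *out, size_t out_len)
        {
            int gaps = word_count - 1;
            if (gaps <= 0) {  /* one word or none: nothing to pad */
                snprintf(out, out_len, "%s", line);
                return 0;
            }
            int base  = remain / gaps;  /* every gap gets this many extra spaces */
            int extra = remain % gaps;  /* the leftmost `extra` gaps get one more */

            size_t o = 0;
            int g = 0;
            while (*line != '\0' && o + 1 < out_len) {
                if (*line == ' ') {
                    int pad = 1 + base + (g < extra ? 1 : 0);
                    while (pad-- > 0 && o + 1 < out_len)
                        out[o++] = ' ';
                    g++;
                    line++;
                } else {
                    out[o++] = *line++;
                }
            }
            out[o] = '\0';
            return 0;
        }

    Called as add_space("123 567 890", 2, 3, buf, sizeof buf), buf comes back as "123  567  890": both gaps get one extra space, matching the left-to-right rule above (here the division happens to be even).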

    Read the article

  • AudioTrack lag: obtainBuffer timed out

    - by BTR
    I'm playing WAVs on my Android phone by loading the file and feeding the bytes into AudioTrack.write() via the FileInputStream -> BufferedInputStream -> DataInputStream method. The audio plays fine, and while it is playing I can easily adjust sample rate, volume, etc. on the fly with nice performance. However, it's taking about two full seconds for a track to start playing. I know AudioTrack has an inescapable delay, but this is ridiculous. Every time I play a track, I get this:

        03-13 14:55:57.100: WARN/AudioTrack(3454): obtainBuffer timed out (is the CPU pegged?) 0x2e9348 user=00000960, server=00000000
        03-13 14:55:57.340: WARN/AudioFlinger(72): write blocked for 233 msecs, 9 delayed writes, thread 0xba28

    I've noticed that the delayed write count increases by one every time I play a track -- even across multiple sessions -- from the time the phone has been turned on. The block time is always 230-240 ms, which makes sense considering a minimum buffer size of 9600 on this device (9600 / 44100). I've seen this message in countless searches on the Internet, but it usually seems to be related to not playing audio at all or to skipping audio. In my case, it's just a delayed start. I'm running all my code in a high-priority thread. Here's a truncated-yet-functional version of what I'm doing. This is the thread callback in my playback class. Again, this works (only playing 16-bit, 44.1kHz, stereo files right now); it just takes forever to start and produces that obtainBuffer/delayed write message every time.

        public void run() {
            // Load file
            FileInputStream mFileInputStream;
            try {
                // mFile is an instance of a custom file class -- this is correct,
                // so don't sweat this line
                mFileInputStream = new FileInputStream(mFile.path());
            } catch (FileNotFoundException e) {}
            BufferedInputStream mBufferedInputStream = new BufferedInputStream(mFileInputStream, mBufferLength);
            DataInputStream mDataInputStream = new DataInputStream(mBufferedInputStream);

            // Skip header
            try {
                if (mDataInputStream.available() > 44)
                    mDataInputStream.skipBytes(44);
            } catch (IOException e) {}

            // Initialize device
            mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, ConfigManager.SAMPLE_RATE,
                AudioFormat.CHANNEL_CONFIGURATION_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                ConfigManager.AUDIO_BUFFER_LENGTH, AudioTrack.MODE_STREAM);
            mAudioTrack.play();

            // Initialize buffer
            byte[] mByteArray = new byte[mBufferLength];
            int mBytesToWrite = 0;
            int mBytesWritten = 0;

            // Loop to keep thread running
            while (mRun) {
                // This flag is turned on when the user presses "play"
                while (mPlaying) {
                    try {
                        // Check if data is available
                        if (mDataInputStream.available() > 0) {
                            // Read data from file and write to audio device
                            mBytesToWrite = mDataInputStream.read(mByteArray, 0, mBufferLength);
                            mBytesWritten += mAudioTrack.write(mByteArray, 0, mBytesToWrite);
                        }
                    } catch (IOException e) {
                    }
                }
            }
        }

    If I can get past the artificially long lag, I can easily deal with the inherent latency by starting my write at a later, predictable position (i.e., skipping past the minimum buffer length when I start playing a file).

    Read the article

  • Can't log to a file from Tomcat 6 with log4j

    - by Ivan Nakov
    I have one stupid problem which has been killing me for hours. I'm trying to configure logging for my project. I started with a simple Spring MVC project generated by STS, then added an org.apache.log4j.RollingFileAppender to the existing log4j.xml file:

        <?xml version="1.0" encoding="UTF-8"?>

        <!-- Appenders -->
        <appender name="console" class="org.apache.log4j.ConsoleAppender">
            <param name="Target" value="System.out" />
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%-5p: %c - %m%n" />
            </layout>
        </appender>

        <appender name="FilleAppender" class="org.apache.log4j.RollingFileAppender">
            <param name="maxFileSize" value="100KB" />
            <param name="maxBackupIndex" value="2" />
            <param name="File" value="/home/ivan/Desktop/app.log" />
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%d{ABSOLUTE} %5p %c{1}: %m%n " />
            </layout>
        </appender>

        <!-- Application Loggers -->
        <logger name="org.elsys.logger">
            <level value="debug" />
        </logger>

        <!-- 3rdparty Loggers -->
        <logger name="org.springframework.core">
            <level value="info" />
        </logger>

        <logger name="org.springframework.beans">
            <level value="info" />
        </logger>

        <logger name="org.springframework.context">
            <level value="info" />
        </logger>

        <logger name="org.springframework.web">
            <level value="info" />
        </logger>

        <!-- Root Logger -->
        <root>
            <priority value="debug" />
            <appender-ref ref="FilleAppender" />
        </root>

    When I deploy the project to Tomcat 6 and open the URL, the logger doesn't generate a log file. I'm trying to log from this controller:

        @Controller
        public class HomeController {

            private static final Logger logger = LoggerFactory.getLogger(HomeController.class);

            /**
             * Simply selects the home view to render by returning its name.
             */
            @RequestMapping(value = "/", method = RequestMethod.GET)
            public String home(Locale locale, Model model) {
                logger.info("Welcome home! the client locale is " + locale.toString());
                Date date = new Date();
                DateFormat dateFormat = DateFormat.getDateTimeInstance(DateFormat.LONG, DateFormat.LONG, locale);
                String formattedDate = dateFormat.format(date);
                logger.debug("send view");
                model.addAttribute("serverTime", formattedDate);
                return "home";
            }
        }

    When I log from this simple Main class, it works correctly:

        public class Main {
            public static void main(String[] args) {
                Logger log = LoggerFactory.getLogger(Main.class);
                log.debug("Test");
            }
        }

    I'm using Tomcat 6 and Ubuntu 11.10. I did some research on the net and found various options to fix this problem, but they didn't help. Please, if someone has ideas on how to fix it, help me.

    Read the article

  • Broken CUPS installation on a 64-bit Ubuntu server

    - by user67046
    Hi, I am having trouble with a CUPS installation. It seems to be in a broken state: when I try to reinstall it, it stalls, and the same happens if I try to remove it completely. I am running the 64-bit server version of Ubuntu 10.10 with kernel Linux 2.6.35-22-server. When I try to start the CUPS daemon with the following command

        sudo service cups start

    it just stays there and nothing happens. I have tried to remove it, to be able to reinstall it, with the following command

        sudo apt-get purge cups

    It finally stalls with the following message

        Removing cups ...

    After that nothing happens. The process tree for the apt-get command looks like this:

         1404  1404  1404 ?        00:00:00 sshd
        26495 26495 26495 ?        00:00:00 sshd
        26581 26495 26495 ?        00:00:00 sshd
        26582 26582 26582 pts/4    00:00:00 bash
        27158 27158 26582 pts/4    00:00:00 apt-get
        27172 27172 27172 pts/2    00:00:00 dpkg
        27176 27172 27172 pts/2    00:00:00 cups.prerm
        27178 27172 27172 pts/2    00:00:00 stop

    I have tried to leave the process running for a while to see if I get any error messages, but without success. To get out of it I have to kill the processes.

        sudo dpkg --configure cups
        dpkg: error processing cups (--configure):
         package cups is already installed and configured
        Errors were encountered while processing:
         cups

        sudo dpkg --status cups
        Package: cups
        Status: purge ok installed
        Priority: optional
        Section: net
        Installed-Size: 8292
        Maintainer: Ubuntu Developers <[email protected]>
        Architecture: amd64
        Version: 1.4.4-6ubuntu2.3
        Replaces: cupsddk-drivers (<< 1.4.0)
        Provides: cupsddk-drivers
        Depends: libavahi-client3 (>= 0.6.16), libavahi-common3 (>= 0.6.16), libc6 (>= 2.7), libcups2 (>= 1.4.4-3~), libcupscgi1 (>= 1.4.2), libcupsdriver1 (>= 1.4.0), libcupsimage2 (>= 1.4.0), libcupsmime1 (>= 1.4.0), libcupsppdc1 (>= 1.4.0), libdbus-1-3 (>= 1.0.2), libgcc1 (>= 1:4.1.1), libgnutls26 (>= 2.7.14-0), libgssapi-krb5-2 (>= 1.8+dfsg), libijs-0.35, libkrb5-3 (>= 1.6.dfsg.2), libldap-2.4-2 (>= 2.4.7), libpam0g (>= 0.99.7.1), libpaper1, libpoppler7, libslp1, libstdc++6 (>= 4.1.1), libusb-0.1-4 (>= 2:0.1.12), zlib1g (>= 1:1.1.4), debconf (>= 1.2.9) | debconf-2.0, upstart-job, poppler-utils (>= 0.12), procps, ghostscript, lsb-base (>= 3), cups-common (>= 1.4.4), cups-client (>= 1.4.4-6ubuntu2.3), ssl-cert (>= 1.0.11), adduser, bc, ttf-freefont, cups-ppdc
        Recommends: foomatic-filters (>= 4.0), cups-driver-gutenprint, ghostscript-cups
        Suggests: cups-bsd, foomatic-db-compressed-ppds | foomatic-db, hplip, xpdf-korean | xpdf-japanese | xpdf-chinese-traditional | xpdf-chinese-simplified, cups-pdf, smbclient (>= 3.0.9), udev
        Breaks: foomatic-filters (<< 4.0)
        Conflicts: cupsddk-drivers (<< 1.4.0)
        Conffiles:
         /etc/fonts/conf.d/99pdftoopvp.conf a5221cfad70a981c80864229ef56586d
         /etc/logrotate.d/cups 5bb41fa9900f0d1c565954405a2bd7c4
         /etc/default/cups 2b436fbb1a32b82b6aba45a76a1d7e40
         /etc/pam.d/cups ff2488324854f7b1e892bb0df062d5f0
         /etc/init/cups.conf 1a3cd022e8474e3d2b44640f33ce68e3
         /etc/ufw/applications.d/cups 29e98a6d850da251e180c3d68dec2bd3
         /etc/apparmor.d/usr.sbin.cupsd 60c4b26bfd5c033baa3dd48a3b2e9911
         /etc/cups/cupsd.conf e2c7ec15835ea0939e5e86f7c6efcc03
         /etc/cups/snmp.conf 2326a8af1e112676d55245bc5eb459ca
         /etc/cups/cupsd.conf.default a68d54d76021e857dd1d64edf57d36c5
        Description: Common UNIX Printing System(tm) - server
         The Common UNIX Printing System (or CUPS(tm)) is a printing system and
         general replacement for lpd and the like. It supports the Internet Printing
         Protocol (IPP), and has its own filtering driver model for handling various
         document types.
         .
         This package provides the CUPS scheduler/daemon and related files.
        Original-Maintainer: Debian CUPS Maintainers <[email protected]>

    I would be grateful if someone could provide some help on how to solve this issue.

    Read the article

  • iptables DNS resolution

    - by Favolas
    I have a virtual machine with Fedora 19 acting as a router. This machine has an interface (p8p1) with the IP 172.16.1.254 that is connected to another machine (IP 172.16.1.1) that's simulating the external network. I've installed Snort 2.9.2.2, applied the snortsam-2.9.2.2.diff.gz patch, and installed SnortSam 2.70 on the router machine. In snort.conf, besides altering some RULE_PATH entries, I believe I've only added the following line to the file:

        output alert_fwsam: 127.0.0.1:898/password

    After running these two commands:

        ifconfig p8p1 promisc
        /usr/local/snort/bin/snort -v -i p8p1

    if I ping from the external network to the router IP, I can see the info about the pings. One of the rules that I have is icmp-info.rules, which has this single line:

        alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"ICMP-INFO Echo Reply"; icode:0; itype:0; classtype:misc-activity; sid:408; rev:6; fwsam: src, 5 minutes;)

    snortsam.conf has this data:

        defaultkey password
        accept localhost
        keyinterval 30 minutes
        dontblock 192.168.1.1 # local network
        rollbackhosts 50
        rollbackthreshold 20 / 30 secs
        rollbacksleeptime 1 minute
        logfile /var/log/snort/snortsam.log
        loglevel 3
        daemon
        nothreads
        # important line for generating the blocks via iptables
        iptables p8p1 LOG
        bindip 127.0.0.1

    Now I run this command:

        /usr/local/snort/bin/snort -u snort -i p8p1 -c /etc/snort/snort.conf -l /var/log/snort -Dq

    The terminal gives this message:

        Spawning daemon child...
        My daemon child 2080 lives...
        Daemon parent exiting (0)

    and when I run snortsam in the terminal I get this:

        SnortSam, v 2.70.
        Copyright (c) 2001-2009 Frank Knobbe. All rights reserved.
        Plugin 'fwsam': v 2.5, by Frank Knobbe
        Plugin 'fwexec': v 2.7, by Frank Knobbe
        Plugin 'pix': v 2.9, by Frank Knobbe
        Plugin 'ciscoacl': v 2.12, by Ali Basel <[email protected]>
        Plugin 'cisconullroute': v 2.5, by Frank Knobbe
        Plugin 'cisconullroute2': v 2.2, by Wouter de Jong <[email protected]>
        Plugin 'netscreen': v 2.10, by Frank Knobbe
        Plugin 'ipchains': v 2.8, by Hector A. Paterno <[email protected]>
        Plugin 'iptables': v 2.9, by Fabrizio Tivano <[email protected]>, Luis Marichal <[email protected]>
        Plugin 'ebtables': v 2.4, by Bruno Scatolin <[email protected]>
        Plugin 'watchguard': v 2.7, by Thomas Maier <[email protected]>
        Plugin 'email': v 2.12, by Frank Knobbe
        Plugin 'email-blocks-only': v 2.12, by Frank Knobbe
        Plugin 'snmpinterfacedown': v 2.3, by Ali BASEL <[email protected]>
        Plugin 'forward': v 2.8, by Frank Knobbe

        Parsing config file /etc/snortsam.conf...
        Linking plugin 'iptables'...
        Checking for existing state file "/var/db/snortsam.state". Found.
        Reading state file.
        Starting to listen for Snort alerts.

    and snortsam.log has an entry like this:

        2013/10/25, 10:15:17, -, 1, snortsam, Starting to listen for Snort alerts.

    Now, from the external machine I do ping 172.16.1.254 and it starts showing the info, and an alert file is created in /var/log/snort/ that has the info about the pings. Something like:

        [**] [1:408:6] ICMP-INFO Echo Reply [**]
        [Classification: Misc activity] [Priority: 3]
        10/25-10:35:16.061319 172.16.1.254 -> 172.16.1.1
        ICMP TTL:64 TOS:0x0 ID:38720 IpLen:20 DgmLen:84
        Type:0 Code:0 ID:1389 Seq:1 ECHO REPLY

    Also, if I run instead /usr/local/snort/bin/snort snort -v -i p8p1, I get this message:

        Running in packet dump mode

                --== Initializing Snort ==--
        Initializing Output Plugins!
        Snort BPF option: snort
        pcap DAQ configured to passive.
        The DAQ version does not support reload.
        Acquiring network traffic from "p8p1".
        ERROR: Can't set DAQ BPF filter to 'snort' (pcap_daq_set_filter: pcap_compile: syntax error)!
        Fatal Error, Quitting..

    So, these are my questions:

      - Shouldn't SnortSam block the PING?
      - Is that DAQ error causing the problem?
      - If so, how can I solve it?

    Read the article

  • DHCP and DNS services configuration for VOIP system, windows domain, etc

    - by Stemen
    My company has numerous physical offices (for purposes of this discussion, 15 buildings). Some of them are well connected to our primary data center via fiber. Others will be connected to the data center by P2P T1. We are in the beginning stages of implementing an Avaya VOIP telephone system, and we will be replacing a significant portion of our network infrastructure in the process. In tandem with the phone system implementation, we are going to be re-addressing some of our networks and consolidating most of our Windows domains into one (not all domains, just most). We currently have quite a few Windows domains, and they of course each have their own DNS zones. A few of those networks currently use DHCP, but the majority use static IP assignments for every device. I'm tired of managing static assignments -- I want to use DHCP configuration on everything except servers. Printers and the like will have DHCP reservations. The new IP phones will need to get IP addresses from DHCP, though they need to be in a separate VLAN from the computers/printers/etc. The computers and printers need to be registered in DNS. That's currently handled by the Windows DHCP servers on each of the respective domains. We need to place a priority on DHCP and DNS being available on a per-site basis (in case something were to interrupt the WAN connection) for computers and (primarily) phones. Smaller locations (which will have IP phones but not be a member of any Windows domain) will not have any Windows DNS/DHCP server(s) available. We are also looking for the easiest way to replace a part if it were to fail. That is to say, if a server/appliance/router hosting DHCP were to crash hard, and we couldn't extremely quickly recover the DHCP reservations and leases (and subsequently restore them onto a cold spare), we anticipate that bad things could happen. What is the best idea for how to re-implement DNS and DHCP keeping all of the above in mind? Some thoughts that have been raised (by myself or my coworkers):

      1. Use Windows DNS and DHCP servers, where they exist, and use IP helpers to route DHCP requests to some other Windows server if necessary. May not be acceptable if the WAN goes down and clients don't get a DHCP response.
      2. Use Windows DNS (everywhere, over WAN in some cases) and a mix of Windows DHCP and DHCP provided by Cisco routers. Every site would be covered for DHCP, but from what I've read, Cisco routers can't handle dynamic registration of DHCP clients to Windows DNS servers, which might create a problem where Cisco routers are used for DHCP.
      3. Use Windows DNS (everywhere, over WAN in some cases) and a mix of Windows DHCP and DHCP provided by some service running on an extremely low-price Linux server. Is there any such software that would allow DHCP leases granted by these Linux boxes to be dynamically registered on the Windows DNS servers?
      4. Come up with a Linux solution for both DNS and DHCP, and deploy low-price Linux servers to every site. Requirements would be that the DNS zone be multi-master (like Windows DNS integrated with Active Directory), that DHCP be able to make dynamic DNS registrations in that zone for every lease (where a hostname is provided and is thus possible), and that multiple servers be either authoritative for the same DHCP scope or at least receive a real-time copy/replication/sync of the lease table, so that if one server dies we still know which MAC has what address.
      5. Purchase dedicated DNS/DHCP appliances, deploying to all sites. From what I read/see, this solves all of our technical problems. Then come the financial problems... I don't have a ton of money to spend on this.
      6. Or some other solution that we've thus far overlooked and will consider upon recommendation.

    Can Cisco routers or Windows servers sync DHCP lease tables so that multiple servers can be authoritative (or active/passive, for all I care) for the same scope, in case one of the partners were to fail? I've read online (repeatedly) that ISC's DHCP is able to maintain the same lease table across multiple servers in order to solve this problem. Does anyone have any experience or advice regarding that?

    Read the article

  • RAID and Partitions, Guidance Needed

    - by beauregarde
    Alright, I have:

      - a Biostar TA790GX3A2+ mobo
      - 2x Seagate 750GB hard drives (with 2 different speeds)
      - an X4 9750
      - a GeForce 9800GT
      - 2GB RAM

    I want to configure my computer with partitions in various RAID arrays. The partitions I know I want (disk letters are mostly for reference here):

      - C: XP Boot
      - D: XP Swap
      - E: XP Run
      - F: Games
      - G: Data

    The partitions I think I want (repeat caveat):

      - H: small FAT for Win Legacy and DOS
      - I: Linux
      - J: Linux Swap
      - K-?M?: other Linux /whatever partitions
      - N & O: attic for D1 and D2

    What I'd like to do is have C: written on Disk 1 (D1), D: on D2, E: and F: striped on D1 & D2, G: mirrored on D1 & D2, I: on D2 (so I can just switch disc boot priority to open in Ubuntu), J: on D1, and H: somewhere low on D1. I am inexperienced with VMs, so I am unsure as to whether those run out of XP, or whether I need to reserve a primary partition for them. However, I think they would be preferable to scheduling a partition for the same purpose of testing new OS's. I'm also not married to XP, but -64 IS pretty important to me.

    Question time:

      1) Ignoring the irrationality of it all, is such a configuration possible? If not, can some pseudo-approximation be achieved?
      2) My RAID is software, isn't it?
      3) How much should I short a 750GB HD? And should I use that space for my attics, or for my attics and something else, or for something else (.iso's perhaps?)?
      4) If XP is striped on D1 & D2, will that interfere egregiously with my swap writes on D2? If so, would striping both XP and Swap relieve (or at least mitigate) that issue? Should XP and Swap just be written normally on 2 different HDs?
      5) Should I keep DLs and drivers on E: (XP Run), F: (Games), or elsewhere?
      6) Is 4GB enough for C:?
      7) Is 30GB enough (or too much) for E:?
      8) How much to reserve for the Linux and sub-Linux partitions? Also, where on the platter do you think I should put them?
      9) Am I a fool to use FAT16 instead of FAT32 for H: because I'd rather run 95 than 98SE? If not, do you think 2GB or 4GB?
      10) I can't predict what my Max Commit Charge will be, so recommendations for pagefile size? 5GB? 12GB?
      11) VMs: where do I run them? Do they exacerbate anything? Would it be better to just emulate Linux, 95, and DOS?
      EC) What haven't I considered that I really should?

    Notes: the computer is mostly for playing games and watching media, though I wouldn't rule out the use of particularly blah-intensive anything.

    Read the article

  • Where's the Swap File/Partition?

    - by chrisbunney
    I'm investigating the virtual memory configuration of a Debian-based Amazon EC2 instance, and as my background isn't in system administration, I'm slightly confused by what I'm seeing. We're using MongoDB, and the monitoring server we have indicates that the Mongo process is using about 20GB of swap space; however, I can't figure out where this is located on the server. As far as I can tell from the various methods suggested by Google, there is either a much smaller amount, or none at all. top indicates that there is 1.8GB of swap memory:

        top - 15:35:21 up 6 days, 3:23, 1 user, load average: 1.60, 1.43, 1.37
        Tasks: 47 total, 2 running, 45 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 1.3%sy, 0.0%ni, 14.7%id, 83.8%wa, 0.0%hi, 0.0%si, 0.1%st
        Mem: 3928924k total, 2855572k used, 1073352k free, 640564k buffers
        Swap: 0k total, 0k used, 0k free, 1887788k cached

    swapon -s doesn't seem to think there's any swap space:

        Filename Type Size Used Priority

    free -m doesn't think there's any swap either:

                     total       used       free     shared    buffers     cached
        Mem:          3836       3663        172          0        626       2701
        -/+ buffers/cache:        336       3500
        Swap:            0          0          0

    And neither does vmstat:

        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         0  3      0  66224 641372 2874744    0    0    21  5012   21   33  2  2 76 19

    But cat /etc/fstab thinks there is a swap partition:

        /dev/xvda1 / ext3 defaults 1 1
        /dev/xvda2 /mnt ext3 defaults 0 0
        /dev/xvda3 swap swap defaults 0 0
        none /proc proc defaults 0 0
        none /sys sysfs defaults 0 0

    However, df -k gives no indication of the xvda3 partition:

        Filesystem 1K-blocks Used Available Use% Mounted on
        /dev/xvda1 16513960 15675324 0 100% /
        tmpfs 1964460 8 1964452 1% /lib/init/rw
        udev 1914148 28 1914120 1% /dev
        tmpfs 1964460 4 1964456 1% /dev/shm

    So I really don't know what to make of this, because I appear to have a process using about 10 times more virtual memory than what might be available, and I have no idea where this virtual memory is on the system. I'm probably misinterpreting the output of the tools, so I'd be grateful if someone would be able to set me straight: What have I got wrong, what's the right interpretation, and how do you reach that interpretation?

    EDIT0: We use 10gen's MMS for monitoring the database; the relevant section for memory from the last data point is:

        "mem": {
            "virtual": 20749,
            "bits": 64,
            "supported": true,
            "mappedWithJournal": 20376,
            "mapped": 10188,
            "resident": 1219
        },

    This JSON is specific to the database process (I believe) rather than the system as a whole. fdisk -l /dev/xvda outputs... nothing? I tried each of the 3 xvda entries in /etc/fstab as well:

        root@ip:~# fdisk -l /dev/xvda1

        Disk /dev/xvda1: 34.4 GB, 34359738368 bytes
        255 heads, 63 sectors/track, 4177 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/xvda1 doesn't contain a valid partition table

        root@ip:~# fdisk -l /dev/xvda2
        root@ip:~# fdisk -l /dev/xvda3
        root@ip:~#

    EDIT1: Output of cat /proc/meminfo for the sake of completeness:

        MemTotal:        3928924 kB
        MemFree:          726600 kB
        Buffers:          648368 kB
        Cached:          2216556 kB
        SwapCached:            0 kB
        Active:          1945100 kB
        Inactive:         994016 kB
        Active(anon):      60476 kB
        Inactive(anon):    12952 kB
        Active(file):    1884624 kB
        Inactive(file):   981064 kB
        Unevictable:           0 kB
        Mlocked:               0 kB
        SwapTotal:             0 kB
        SwapFree:              0 kB
        Dirty:            387180 kB
        Writeback:             0 kB
        AnonPages:         73380 kB
        Mapped:          1188260 kB
        Shmem:                48 kB
        Slab:             149768 kB
        SReclaimable:     146076 kB
        SUnreclaim:         3692 kB
        KernelStack:        1104 kB
        PageTables:        16096 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:     1964460 kB
        Committed_AS:     305572 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:       16760 kB
        VmallocChunk:   34359721448 kB
        HardwareCorrupted:     0 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB
        DirectMap4k:     3932160 kB
        DirectMap2M:           0 kB

    Read the article

  • Why did my wireless adapter stop working?

    - by AndreaNobili
    today I correctly installed the driver for the TP-LINK TL-WN725N USB wireless adapter on my Raspberry Pi (I use Raspbian, which is Debian-based), then I set up the wifi using wpa_supplicant as explained in this tutorial: http://www.maketecheasier.com/setup-wifi-on-raspberry-pi/

    This worked fine until this evening. Then it suddenly stopped working when I try to connect over SSH while the Raspberry Pi is on the wireless network (or rather should be, as it is not in the list of my router's connected DHCP clients). The strange thing is that the USB wireless adapter blinks, so I don't think this is a driver problem. Over ethernet I have no problem: the Pi appears in my router's list of connected DHCP clients and I can reach it by SSH. When I am connected over ethernet, ifconfig shows:

      pi@raspberrypi ~ $ ifconfig
      eth0      Link encap:Ethernet  HWaddr b8:27:eb:2a:9f:b0
                inet addr:192.168.1.9  Bcast:192.168.1.255  Mask:255.255.255.0
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:48 errors:0 dropped:0 overruns:0 frame:0
                TX packets:59 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:6006 (5.8 KiB)  TX bytes:8268 (8.0 KiB)

      lo        Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                UP LOOPBACK RUNNING  MTU:65536  Metric:1
                RX packets:8 errors:0 dropped:0 overruns:0 frame:0
                TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:1104 (1.0 KiB)  TX bytes:1104 (1.0 KiB)

      wlan0     Link encap:Ethernet  HWaddr e8:94:f6:19:80:4c
                UP BROADCAST MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    So it seems the wlan0 USB wireless adapter driver is loaded correctly. If I remove the USB wireless adapter and plug it back into the USB port, the last lines of the dmesg log are:

      [ 20.303172] smsc95xx 1-1.1:1.0 eth0: hardware isn't capable of remote wakeup
      [ 20.306340] RTL871X: set bssid:00:00:00:00:00:00
      [ 20.306726] RTL871X: set ssid [g\xffffffc6isQ\xffffffffJ\xffffffec)\xffffffcd\xffffffba\xffffffba\xffffffab\xfffffff2\xfffffffb\xffffffe3F|\xffffffc2T\xfffffff8\x1b\xffffffe8\xffffffe7\xffffff8dvZ.c3\xffffff9f\xffffffc9\xffffff9a\xffffff9aD\xffffffa7\x1a\xffffffa0\x1a\xffffff8b] fw_state=0x00000008
      [ 21.614585] RTL871X: indicate disassoc
      [ 21.908495] smsc95xx 1-1.1:1.0 eth0: link up, 100Mbps, full-duplex, lpa 0x45E1
      [ 25.006282] Adding 102396k swap on /var/swap. Priority:-1 extents:1 across:102396k SSFS
      [ 26.247997] RTL871X: nolinked power save enter

    Some of these lines relate to the RTL871X, which is my USB wireless adapter, but I can't tell whether they report an error or whether everything is fine. Looking at the adapter status:

      pi@raspberrypi ~ $ ip link list dev wlan0
      3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DORMANT qlen 1000
          link/ether e8:94:f6:19:80:4c brd ff:ff:ff:ff:ff:ff

    The mode is DORMANT, but I think that is normal because I am currently connected over ethernet. I tried to bring the adapter up, but it seems to have no effect:

      pi@raspberrypi ~ $ sudo ip link set dev wlan0 up
      pi@raspberrypi ~ $ ip link list dev wlan0
      3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DORMANT qlen 1000
          link/ether e8:94:f6:19:80:4c brd ff:ff:ff:ff:ff:ff

    This is my /etc/network/interfaces file, which looks fine:

      auto lo
      iface lo inet loopback
      iface eth0 inet dhcp

      allow-hotplug wlan0
      iface wlan0 inet manual
      wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
      iface default inet dhcp

    and this is /etc/wpa_supplicant/wpa_supplicant.conf, which I also think is fine (I have not changed it since it last worked):

      ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
      update_config=1

      network={
          ssid="MY-NETWORK"
          psk="mypassword"
          key_mgmt=WPA-PSK
      }

    Indeed, if I run a network scan, MY-NETWORK shows up correctly in the list:

      pi@raspberrypi ~ $ sudo iwlist wlan0 scan | grep ESSID
      ESSID:"TeleTu_74888B0060AD"
      ESSID:"MY-NETWORK"
      ESSID:"FASTWEB-1-PT6NtjL4TOSe"
      ESSID:"DC"

    So I rebooted the system and removed the ethernet cable, but when I try to connect to the Raspberry Pi again I get the following error:

      andrea@andrea-virtual-machine:~$ sudo ssh pi@192.168.1.9
      ssh: connect to host 192.168.1.9 port 22: No route to host

    It seems it cannot connect over wireless. What could be the problem? What am I missing? How can I fix this? Thanks
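    A useful intermediate step might be to associate by hand and watch what happens at each stage. A minimal sketch, assuming wpa_supplicant and dhclient are installed (both ship with Raspbian); the interface name and config path are the ones from the question:

      # Bring the interface up and associate manually, in the background:
      sudo ip link set dev wlan0 up
      sudo wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf
      # Request an address over the wireless link, verbosely:
      sudo dhclient -v wlan0
      # Check whether an address was obtained and whether we are associated:
      ip addr show wlan0
      iwconfig wlan0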

    Read the article

  • Magento hosting on a budget

    - by spa
    I have to do a setup for Magento. My constraints are primarily ease of setup and fault tolerance/failover; furthermore, costs are an issue. I have three identical physical servers to get the job done. Each server node has an i7 quad core, 16GB RAM, and 2x3TB HD in a software RAID 1 configuration. Each node currently runs Ubuntu 12.04. I have an additional IP address which can be routed to any of these nodes. The Magento shop has at most 1000 products, 50% of them bundle products. I would estimate that at most 100 users are active at once, which leads me to the conclusion that performance is not the top priority here.

    My first setup idea: One node (lb) runs nginx as a load balancer (a sketch of the config follows below). The additional IP is used with the domain name and routed to this node by default. Nginx distributes the load equally between the other two nodes (shop1, shop2). Shop1 and shop2 are configured identically: each runs Apache2 and MySQL, with the MySQL instances in master/slave replication. My failover strategy:

    - lb fails: route the IP to shop1 (MySQL master), continue.
    - shop1 fails: lb handles that automatically; promote the MySQL slave on shop2 to master, reconfigure Magento to use shop2 for writes, continue.
    - shop2 fails: lb handles that automatically, continue.

    Is this a sane strategy? Has anyone done a similar setup with Magento?

    My second setup idea: Another way would be to use DRBD for storing the MySQL data files on shop1 and shop2. I understand that in this scenario only one node/MySQL instance can be active while the other serves as hot standby. So if shop1 fails, I would start MySQL on shop2, route the IP to shop2, and continue. I like this because the MySQL setup is easier and the nodes can be configured 99% identically. In this case the load balancer becomes useless and I have a spare server.

    My third setup idea: The third way might be master-master replication of the MySQL databases. However, in my opinion this could be tricky, as Magento isn't built for this scenario (e.g. conflicting ids for new rows). I would not do that until I have heard of a working example.

    Could you give me advice on which route to follow? There seems to be no single "good" way to do it. E.g. I read blog posts describing a MySQL master/slave setup for Magento, but elsewhere I read that data might get duplicated when the slave lags behind the master (e.g. when an order is placed, a customer might get created twice). I'm kind of lost here.
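    For the first idea, the nginx side could be as small as an upstream block listing the two Apache nodes. A minimal sketch, not a tested config; the shop1/shop2 hostnames are the question's own labels and would need to resolve to the two backend nodes:

      upstream magento_backend {
          server shop1:80;    # Apache + MySQL master
          server shop2:80;    # Apache + MySQL slave
      }

      server {
          listen 80;

          location / {
              proxy_pass http://magento_backend;
              proxy_set_header Host $host;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          }
      }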

    Read the article

  • Why do we need different CPU architectures for servers, mini/mainframes & mixed-core?

    - by claws
    Hello, I was just wondering what CPU architectures are available other than Intel and AMD, and found the List of CPU architectures on Wikipedia. It categorizes notable CPU architectures as follows:

    Embedded CPU architectures
    Microcomputer CPU architectures
    Workstation/Server CPU architectures
    Mini/Mainframe CPU architectures
    Mixed core CPU architectures

    I was analyzing their purposes and have a few doubts, taking the microcomputer CPU (PC) architecture as the reference and comparing the others to it.

    Embedded CPU architectures: These are a completely different world. Embedded systems are small and do a very specific task, mostly in real time and with low power consumption, so we do not need registers as numerous or as wide as those in a microcomputer CPU (a typical PC). In other words, we need a new, small, tiny architecture; hence a new architecture and a new instruction set (RISC). This also clarifies why we need a separate operating system (an RTOS).

    Workstation/Server CPU architectures: I don't know what a workstation is; could someone clarify? As for servers: a server is dedicated to running specific software (server software like httpd, mysql, etc.). Even if other processes run, the server process needs priority, so a different scheduling scheme is needed, and thus an operating system different from a general-purpose one. If you have more points on the need for a server OS, please mention them. But I don't see why we need a new CPU architecture; why can't the microcomputer CPU architecture do the job? Can someone please clarify?

    Mini/Mainframe CPU architectures: Again, I don't know what these are, or what minis or mainframes are used for. I just know they are very big and occupy an entire floor, but I have never read about the real-world problems they solve. If anyone works on one of these, please share your knowledge. Can someone clarify their purpose, and why the microcomputer CPU architecture is not suitable for them? Is there a new kind of operating system for these too? Why?

    Mixed core CPU architectures: Never heard of these.

    If possible, please keep your answer in this format:

    XYZ CPU architectures
    Purpose of XYZ
    Need for a new architecture: why can't the current microcomputer CPU architecture work? It goes up to 3 GHz and has up to 8 cores.
    Need for a new operating system: why do we need a new kind of operating system for this kind of architecture?

    Read the article

  • How to stop Apache from crashing my entire server?

    - by CyberShadow
    I maintain a Gentoo server with a few services, including Apache. It's fairly low-end (2GB of RAM and a low-end CPU with 2 cores). My problem is that, despite my best efforts, an overloaded Apache crashes the entire server. In fact, at this point I'm close to being convinced that Linux is a horrible operating system that isn't worth anyone's time looking for stability under load. Things I tried:

    - Adjusting oom_adj for the root Apache process (and thus all its children). That had close to no effect: when Apache was overloaded it would grind the system down, as the system paged out everything else before it got around to killing anything.
    - Turning off swap. Didn't help; the kernel would instead evict pages backing process binaries and other files on /, causing the same effect.
    - Putting Apache in a memory-limited cgroup (512 MB of RAM, a quarter of the total). This "worked", at least in my own stress tests, except the server keeps crashing under load (basically stalling all other processes, inaccessible via SSH, etc.). A sketch of this setup follows below.
    - Running it with idle I/O priority. This wasn't a very good idea in the end, because it just caused the system load to climb indefinitely (into the thousands) with almost no visible effect, until you tried to access an unbuffered part of the disk, which caused the task to freeze. (So much for good I/O scheduling, eh?)
    - Limiting the number of concurrent connections to Apache. Setting the number too low made web sites unresponsive, since most slots were occupied by long requests (file downloads).
    - Trying various Apache MPMs (prefork, event, itk) without much success.
    - Switching from prefork/event+php-cgi+suphp to itk+mod_php. This improved performance but didn't solve the actual problem.
    - Switching I/O schedulers (cfq to deadline).

    Just to stress this: I don't care if Apache itself goes down under load, I just want the rest of my system to remain stable. Of course, having Apache recover quickly after a brief period of intensive load would be great, but one step at a time. Right now I am mostly dumbfounded by how humanity, in this day and age, can design an operating system where such a seemingly simple task (don't allow one system component to crash the entire system) seems practically impossible, or at least very hard to do. Please don't suggest things like VMs or "BUY MORE RAM".

    Some more information, gathered with a friend's help: the processes hang when the cgroup OOM killer is invoked. Here's the call trace:

      [<ffffffff8104b94b>] ? prepare_to_wait+0x70/0x7b
      [<ffffffff810a9c73>] mem_cgroup_handle_oom+0xdf/0x180
      [<ffffffff810a9559>] ? memcg_oom_wake_function+0x0/0x6d
      [<ffffffff810aa041>] __mem_cgroup_try_charge+0x32d/0x478
      [<ffffffff810aac67>] mem_cgroup_charge_common+0x48/0x73
      [<ffffffff81081c98>] ? __lru_cache_add+0x60/0x62
      [<ffffffff810aadc3>] mem_cgroup_newpage_charge+0x3b/0x4a
      [<ffffffff8108ec38>] handle_mm_fault+0x305/0x8cf
      [<ffffffff813c6276>] ? schedule+0x6ae/0x6fb
      [<ffffffff8101f568>] do_page_fault+0x214/0x22b
      [<ffffffff813c7e1f>] page_fault+0x1f/0x30

    At this point, the Apache memory cgroup is practically deadlocked and burning CPU in syscalls (all with the above call trace). This seems like a problem in the cgroup implementation...
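    For reference, a minimal sketch of the 512 MB cgroup setup described above, using the cgroup v1 interface of that era. The mount point and the PID file location are assumptions, not taken from this Gentoo box:

      # Create the group and cap its memory at 512 MB:
      mkdir -p /sys/fs/cgroup/memory/apache
      echo $((512 * 1024 * 1024)) > /sys/fs/cgroup/memory/apache/memory.limit_in_bytes
      # Move the root Apache process into the group; its children inherit it
      # (the PID file path is hypothetical, adjust for your distribution):
      echo "$(cat /var/run/apache2.pid)" > /sys/fs/cgroup/memory/apache/cgroup.procs
      # Optionally disable the in-kernel OOM killer for the group, so a userspace
      # handler can react instead of hitting the deadlock described above:
      echo 1 > /sys/fs/cgroup/memory/apache/memory.oom_control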

    Read the article

  • Yahoo is sending our server's transactional email to the Spam folder, even though we have set up SPF and DKIM

    - by Derrick Miller
    Yahoo Mail is sending our server's transactional emails to the Spam folder, even though we have taken quite a few anti-spam steps. By contrast, Gmail allows the messages through to the inbox just fine. Here are the things which are in place:

    - SPF is set up for the domain holsteinplaza.com. Yahoo reports spf=pass in the message headers.
    - DKIM is set up for the domain holsteinplaza.com. Yahoo reports dkim=pass in the message headers.
    - We have a proper reverse DNS entry for the sending mail server: name-to-IP matches IP-to-name.
    - Neither DomainKeys nor SenderID are set up. From what I can tell, DKIM is the way of the future, and there is not much to be gained from adding DomainKeys or SenderID.

    Following are the headers. Any ideas what more I should do to get Yahoo to stop flagging the emails as spam?

      From Holstein Plaza Auctions Sat Jun 25 18:30:08 2011
      X-Apparently-To: [email protected] via 98.138.90.132; Sat, 25 Jun 2011 18:30:11 -0700
      Return-Path: <[email protected]>
      X-YahooFilteredBulk: 70.32.113.42
      Received-SPF: pass (domain of holsteinplaza.com designates 70.32.113.42 as permitted sender)
      X-YMailISG: i_vaA_QWLDuLOmXhDjUv3aBKJl5Un6EiP6Yk2m4yn3jeEuYK
       MkhpqIt9zDUbHARCwXrhl9pqjTANurGVca7gytSs.mryWVQcbWBx.DaItWRb
       VcyrIzwMzXKCSeu06H2a.cJ7HG5vJLJaKmHUUI_1ttXKn_Aegiu5yHvFX83R
       Lpth0witO9zfaKvOMaJV3LAxpIpFOydwvq1cqjZ8nURxQbxM3Cl.QW7MxxrC
       09qLVn_D_xSdU94QdU22IsVmlaRHv.uU5dnIazu.KSkhKpYykDoZA2SH0SY4
       JmTZj3LP8N926xXVDzYQ5K6QvKuJL5g0d9pYZx3KC59sgIu5oHlJ3Q15RdKb
       f3OJw0PR6oIyJ2yStVr8vfbDgOfj3qig03.Tw6g6MMNpv1G7Cuol4oJeUaYP
       xELxX6dHgBgCSuWMcbsrxbK4BIXcS2qhpMqYQ4Isk.XXyA8uvmFXyvgc1ds5
       8jo0rW.Wsw.55Z.KTPaQ0gHXj0T3OGppYMELSJv1iuhPyyAnZpmq01CU0Qd5
       CcRgdyW3HaqhmpXqJCS0Clo16zXA4HmAjR0tgIQrHRLc3D9N02AOzvmDgCb1
       vCh0p00QeKVq8UNkcShPRxZFKi9khtkLhPBlXEKkhJ76zyDmHUxTY.dQHVVD
       8D2hx7BxbqI9DINI8x5oR5Q8hYkZqHYQsmGNkaU77O2BnsEv5WxMEmzrBJ4Z
       h8zGCidgYPiZycZfnfaBp0Xb4tya2WMTN45W02JFcO1qq_UMJ9xPeqZhPEj.
       j9YvBAC8324GGF.c8eWcNB2VB34QHgTcVUl3.c0XUCuncls9Cyg4L7AoIdCi
       HvAklSzDDu9nW6732VEipV9FJ_JkDupDNQU2hfiPG.3OeF8GwTnVYnEn0EiZ
       aO0NCnZhXuLDcN3K7ml3846yRdASvzPFs9s4aJkzR0FkhVvptiMBEOdRkKdG
       wHWmvWpK4GTZpW4yU7CnKpW2MiWWn1MP0h_CCZFKs5.3mfmfPjPVIABN_RuU
       Q8ex5hdKnKlQiqK56LzcPRnYmNtrwdsUX9CYn9d6cPpXR_Bi5jrNJMNzdFvq
       lGO0CBT4QPe2V45U8PtpMitttuDA1cCvmyBPFswxNlL0jyX0a_W.vl0YW5.d
       HhDItpHhDxKRUscM28IR.exetq4QCzyM
      X-Originating-IP: [70.32.113.42]
      Authentication-Results: mta1267.mail.ac4.yahoo.com from=holsteinplaza.com; domainkeys=neutral (no sig); from=holsteinplaza.com; dkim=pass (ok)
      Received: from 127.0.0.1 (EHLO predator.axis80.com) (70.32.113.42) by mta1267.mail.ac4.yahoo.com with SMTP; Sat, 25 Jun 2011 18:30:11 -0700
      Received: (qmail 1440 invoked by uid 48); 25 Jun 2011 21:30:09 -0400
      To: [email protected]
      Subject: this is a test
      X-PHPMAILER-DKIM: phpmailer.worxware.com
      DKIM-Signature: v=1; a=rsa-sha1; q=dns/txt; l=203; s=auction; t=1309051808; c=relaxed/simple;
       h=From:To:Subject; d=holsteinplaza.com; [email protected];
       z=From:=20Holstein=20Plaza=20Auctions=20<[email protected]>
       |To:[email protected]
       |Subject:=20this=20is=20a=20test;
       bh=B3Tw5AQb1va627KEoazuFEBZ0fg=;
       b=oQ5uFq+oekPTGhszyIritjuuIAi3qPNyeitu+aWMhdx3oC6O2j5hJsDFpK0sS5fms7QdnBkBcEzT0iekEvn9EfAdCkGZ2KrtEC0yv7QKQcrjXxy07GJpj9nq0LYbgOuPdw8mGvKxlRZ+jFBX0DRJm0xXFLkr+MEaILw7adHTCCM=
      Date: Sat, 25 Jun 2011 21:30:08 -0400
      From: Holstein Plaza Auctions <[email protected]>
      Reply-to: Holstein Plaza Auctions <[email protected]>
      Message-ID: <[email protected]>
      X-Priority: 3
      X-Mailer: PHPMailer 5.1 (phpmailer.sourceforge.net)
      MIME-Version: 1.0
      Content-Transfer-Encoding: 8bit
      Content-Type: text/plain; charset="iso-8859-1"
      Content-Length: 195
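    One quick sanity check is to confirm what the outside world actually sees in DNS. A small sketch, assuming dig is available (dnsutils/bind-utils); the selector "auction" is taken from the s= tag of the DKIM-Signature above:

      # Published SPF record for the domain:
      dig +short TXT holsteinplaza.com
      # DKIM public key for the selector used in the signature (s=auction):
      dig +short TXT auction._domainkey.holsteinplaza.com
      # Reverse DNS of the sending IP; should match the forward name:
      dig +short -x 70.32.113.42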

    Read the article

< Previous Page | 73 74 75 76 77 78 79 80 81 82 83 84  | Next Page >